AI

Let’s start with terminology: they’re large language models (LLMs), not “artificial intelligence”. This is an important place to start, because it gets at the heart of our approach to using LLMs with Steady.

When you ask yourself questions about when, where, and how you should leverage LLMs in your app, that word choice matters. Rephrasing “should we use AI to figure out a customer’s quarterly goals?” to “should we use a large language model to figure out a customer’s quarterly goals?” makes the correct answer crystal clear.

Where “AI” lets you fill in the blanks with whatever you can imagine, “large language model” grounds you in reality. Right now, today, LLMs can do some things very well and a lot of things poorly. They happen to excel at pattern matching, which in turn makes them perform well at real-world tasks like code completion, voice/tone/language manipulation, and summarization.
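
To make that concrete, here’s roughly what an LLM-backed summarization call looks like under the hood. This is a minimal sketch using OpenAI’s Python SDK; the model name and the sample updates are placeholders for illustration, not anything we actually ship.

```python
# Minimal sketch: summarizing a batch of status updates with an LLM.
# The model name and input text below are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

updates = [
    "Shipped the new onboarding flow; conversion is up 4% week over week.",
    "Postgres migration slipped to next sprint; blocked on a vendor fix.",
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model
    messages=[
        {"role": "system", "content": "Summarize these team updates in two sentences."},
        {"role": "user", "content": "\n".join(updates)},
    ],
)

print(response.choices[0].message.content)
```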

Steady’s entire purpose is to distill the overwhelming amount of information people need to absorb every day into something that’s easy to digest. Suffice it to say, a tool that’s adept at summarization is something we’re very interested in. But we want to do it right. Our guiding principles for using LLMs with Steady:

Accuracy trumps all

If the LLM-backed features we build have a 50% hallucination rate, no one will trust the output and the whole exercise is futile. Worse than futile: we’d actively be injecting more noise into the mix. If we can’t build a trustworthy version of a feature, we won’t build the feature.

Cyborgs, not robots

A large language model cannot replace human intelligence, experience, or intuition, and it would be irresponsible for us to try. What LLMs can do is take grunt work off people’s plates so they can focus on the parts of their jobs where intelligence, experience, and intuition matter most. As a rule, all of our features will trace back to human inputs. We’re not in the “generative AI” business.

Bring the receipts

“Trust us” should never be the answer when it comes to determining whether LLM output is trustworthy. All LLM output should be easily verifiable, so it can earn trust and keep that trust once it’s been earned.
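
In practice, that means generated output never travels alone: every claim carries a pointer back to the verbatim source it came from, so a reader can check it in one click. Here’s a sketch of what that shape could look like; the field names are illustrative, not our actual schema.

```python
# Sketch: pairing every generated claim with its receipt.
# Field names are illustrative, not an actual schema.
from dataclasses import dataclass

@dataclass
class Receipt:
    source_url: str  # where the underlying update lives
    excerpt: str     # verbatim text the claim was derived from

@dataclass
class Claim:
    text: str                # the LLM-generated statement
    receipts: list[Receipt]  # evidence a reader can verify

summary = [
    Claim(
        text="Onboarding conversion improved this week.",
        receipts=[Receipt(
            source_url="https://example.com/updates/123",
            excerpt="conversion is up 4% week over week",
        )],
    ),
]

# A claim with no receipts shouldn't be shown at all.
assert all(claim.receipts for claim in summary)
```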

Keep private customer data private

Customers trust us with some of their most vital, sensitive information. It’s why we’re SOC 2 Type 2 compliant, and why it’s critical for us to treat customer data with care when it comes to LLMs. We will never use your data to train models. We will never use the API of a vendor who uses data for training purposes. Read our security and privacy policies for more.

If you’re a potential or current customer and have questions or concerns about our approach to large language models, we’d be happy to answer them.