Interacting With LLMs: How Do You Work With AI?
So How Do You Work With AI?
It’s more helpful to think of working with AI as working with a person rather than operating a machine. It’s a bit like working with a medical student: they can do some things very well and others not so well, and the only way to find out which is which is to work with them.
Because these models have been trained on human knowledge, they tend to behave more like people than like software. They make mistakes the way humans do, and they are good at human tasks such as summarizing a document or creative writing. They are also good at coding, working with data, and summarizing results, but you have to give them good instructions.
Because they can do so many different things, the instructions you give the model (the “prompt”) matter a great deal.
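To make that concrete, here is a minimal sketch of what a prompt looks like when you call a model programmatically. It assumes the OpenAI Python client (`openai` 1.x) with an `OPENAI_API_KEY` set in the environment; the model name and the instructions themselves are purely illustrative.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The "prompt" is everything we send: the role we assign the model plus the task itself.
response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name; use whichever model you have access to
    messages=[
        {"role": "system", "content": "You are a careful research assistant. Be concise."},
        {"role": "user", "content": "Summarize the key findings of the abstract below in three bullet points."},
    ],
)

print(response.choices[0].message.content)
```

Everything in `messages` is the prompt; vague instructions there tend to produce vague answers, which is why the wording matters so much.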
We’re going to talk about prompts in a moment, but first you need to understand what tools are available to the AI. Until recently, ChatGPT had no way to access the internet, and its training data only went up to 2021; that cutoff has since been updated. Some models accept documents, images, or videos as input; some can generate images; some can run code.
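As a rough illustration of those extra capabilities, here is a hedged sketch of sending an image alongside text to a vision-capable model, again using the OpenAI Python client as an assumed example; the URL and model name are placeholders, and not every model accepts this kind of input.

```python
from openai import OpenAI

client = OpenAI()  # again assumes OPENAI_API_KEY is set

# Sketch: text plus an image in one request (only vision-capable models accept this).
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; must be a model that supports image input
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe the main trend shown in this chart."},
                {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```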
You need to know the capabilities of the model you’re using. Each AI has its own upsides and downsides, which you learn over time, just as you would with different people.
One of the major pitfalls to be aware of is that AI can “hallucinate,” producing plausible but incorrect information, so you have to evaluate its output critically. Its answers can also vary from day to day. It is a powerful tool, but it is not infallible; the only way to really learn its strengths and limitations is to use it frequently.
