Introduction
Gemini is great. ChatGPT and Gemini are running neck-and-neck in the AI race. I use Gemini because it integrates well with Google Search and Google Docs.
If you are still not familiar with what these programs are, simply put, they are pseudo content-generating robots, incorrectly termed AI (Artificial Intelligence). They mimic human language interactions quite nicely.
Let’s ask Gemini for its definition of an LLM and for the limitations of LLMs when considered as a type of AI. I have somewhat edited and augmented Gemini’s response.
Gemini’s Definition
A Large Language Model (LLM) is a type of artificial intelligence that pretends to understand and generate human-like text. It is trained on massive datasets of text, such as books, articles, and websites, and learns to identify patterns and relationships within the data. This allows an LLM to generate coherent and informative text, translate languages, summarize information, answer questions, and even write creative content like poetry or stories.
Gemini: Tell me the ways in which LLMs are incomplete types of artificial intelligence.
[Again, I have edited and augmented some of Gemini’s answer.]
Lack of Common Sense: LLMs often lack the ability to understand and apply common sense knowledge to real-world situations. They may struggle to grasp implicit information or make inferences based on context.
Limited Understanding of Emotions: LLMs have limited capabilities in understanding and expressing emotions. They may not be able to recognize or respond appropriately to emotional cues in human language, leading to potentially awkward or inappropriate interactions.
Factual Inaccuracies and Hallucinations: LLMs are prone to generating factually incorrect or nonsensical information. They may hallucinate or invent details, making it challenging to rely on their output without verification. They often veer off in their own weird directions and are difficult to rein back in.
Bias and Stereotyping: LLMs can inherit and perpetuate biases and stereotypes present in the training data. This can result in unfair or discriminatory responses, particularly in sensitive domains such as race, gender, and sexuality.
Limited Ability to Reason and Plan: LLMs lack any real logic, reasoning, or planning ability, but because they generate human-like responses to questions, it can seem as though they can reason. They struggle with complex reasoning and planning tasks: they may find it difficult to identify cause-and-effect relationships, anticipate consequences, or develop coherent strategies for problem-solving.
Inability to Handle Ambiguity: LLMs often struggle to handle ambiguous or open-ended questions. They may provide overly simplistic or irrelevant responses when presented with scenarios that require nuanced understanding or interpretation.
Lack of Creativity and Originality: LLMs are not inherently creative or original in their responses. They tend to generate content that is derivative of the training data, lacking genuine novelty or a unique perspective.
Caveat emptor
Caveat Emptor:¹ Let the User Beware! Almost all LLM chat AIs that I have seen have been programmed to generate an answer even if it is wrong. But the LLM will respond in such a convincing, assuring manner that it seems as if the answer is 100% accurate and correct. The only times I have seen Gemini not respond are when my instructions were convoluted and confusing OR when Gemini thought the question was straying into an area of a questionable sexual or ethical nature.
But Gemini is still useful!
Despite all that, Gemini is incredibly useful. Gemini is like an intern fresh out of college: advanced book-learning coupled with a complete lack of real-world experience. The better one frames a question, the better the response one will get from Gemini. But be aware: one can also frame the question in such a way as to beg the very answer which Gemini, like a dog fetching a ball, will eagerly return to you, tail wagging and ball dripping with goo.
All Chat AI programs make factual errors;
their results MUST be vetted, edited, and corrected.
Never trust an LLM’s response. NEVER!
Gemini and ChatGPT are soooooo eager to please their human clients. They are at once: Obsequious, Sycophantic, Ingratiating, and Submissive. Their flattery is intoxicating. If one is having a bad day with battered self-esteem, a few moments with Gemini can shore up one’s confidence! … especially if you beg the question!
Usefulness for me
Why I find Gemini valuable and use it throughout my working day:
Studying language
Troubleshooting hardware and software technical issues
Generating software code
Designing a software application
Discussing topics, brainstorming, bouncing ideas off of each other
As a reference book: encyclopedia, thesaurus, dictionary, etc.
Generate ideas about any topic, stimulate brain
Summarize content
Improve your writing, suggest edits and revisions
Generating images
Studying language (Swahili for me)
This one gets a lot of use from me. I have a friend who uses ChatGPT in a similar way. Gemini is the ideal language tutor: patient, knowledgeable, and versed in grammar, slang, and nuance. Better than Google Translate, Gemini is my constant go-to translator when I’m writing articles in Swahili. Often there will be sections with which I disagree: we can then “discuss” alternatives, nuances, and different approaches. The result is not perfect. I still need a native speaker to edit it into more flowing language, but it’s 80% done correctly.
I often ask Gemini to “compare and contrast” similar words so that I can discern a nuance, or I’ll ask it to list idioms using a certain verb. This has been invaluable to my language progress and understanding. I do not see enough news articles discussing this important ability.
Finally, Gemini can act as an instructor, setting up an entire training program complete with quizzes.
Troubleshooting hardware and software technical issues
This capability is amazing. Anytime I get stuck with an issue, Gemini can come up with possible troubleshooting ideas and specific things to try. The ideas are not always correct, but they stimulate my thinking and, when they miss the mark, guide me to where I should search for answers or tutorials. Generally, with HTML or CSS issues, Gemini is about 90% correct.
Today I used it to generate ideas for how to organize my website to support multiple languages. I didn’t use its suggestions, but I did consider and explore the options it gave me, as if it were an intern.
Designing a software application
In the very earliest stages of Google’s AI foray, then named “Bard”, I used it to propose some basic design principles for a system that conjugates Turkish verbs. It was able to discuss the principles with me, which allowed me to refine my ideas. It could also propose some hypothetical sections of code to drive the conjugating engine (a sketch of that idea appears below). Bard was still fairly crude and couldn’t remember much past the tenth prompt, but it was a fun experience that I found valuable and would use again.
Gemini has infinite patience to listen to my ideas.
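To give a flavor of that exchange, here is a minimal sketch of my own (in Python, not Bard’s actual output) of the simplest piece of such a conjugating engine: building the present-continuous form of regular, consonant-final Turkish verb stems with four-way vowel harmony for the -Iyor suffix. The function names and the tiny verb list are purely illustrative.

```python
# A simplified illustration of a Turkish conjugating engine:
# present-continuous (-Iyor) forms for regular, consonant-final stems.

HARMONY = {            # last stem vowel -> harmony vowel for -Iyor
    "e": "i", "i": "i",
    "a": "ı", "ı": "ı",
    "o": "u", "u": "u",
    "ö": "ü", "ü": "ü",
}

PERSON_ENDINGS = {     # personal endings after -(I)yor are invariant
    "ben": "um", "sen": "sun", "o": "",
    "biz": "uz", "siz": "sunuz", "onlar": "lar",
}

def harmony_vowel(stem: str) -> str:
    """Pick the -Iyor vowel from the last vowel of the stem."""
    for ch in reversed(stem):
        if ch in HARMONY:
            return HARMONY[ch]
    raise ValueError(f"no vowel found in stem {stem!r}")

def present_continuous(infinitive: str, person: str) -> str:
    """Conjugate a regular verb: 'gelmek' + 'ben' -> 'geliyorum'."""
    if not infinitive.endswith(("mek", "mak")):
        raise ValueError("expected an infinitive ending in -mek or -mak")
    stem = infinitive[:-3]                     # drop the -mek / -mak ending
    if stem[-1] in "aeıioöuü":
        raise NotImplementedError("vowel-final stems need extra narrowing rules")
    return stem + harmony_vowel(stem) + "yor" + PERSON_ENDINGS[person]

if __name__ == "__main__":
    for verb, person in [("gelmek", "ben"), ("yapmak", "o"), ("görmek", "biz")]:
        print(f"{verb}, {person} -> {present_continuous(verb, person)}")
```

Real Turkish needs far more than this (vowel-final stems, consonant softening, irregular verbs), but this is roughly the level at which Bard and I were able to trade ideas.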
Generating software code
Bard (Gemini’s “ancestor”) could generate code in a variety of programming languages. I checked some of its Ruby code … it was nominally okay. You can have it run a routine and see the output. I don’t think it compares with a skilled human; the code tended to be laborious. But I haven’t tried it since Gemini came on the scene. I read a news headline yesterday saying that Google is using Gemini to generate 20% of its code.
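To show what “have it run a routine and see the output” looks like in practice, here is a hypothetical sketch (in Python rather than Ruby, and not actual Bard output): paste the routine the model hands back into a file, add a few assertions of your own, and run it before trusting it. The word_frequencies function simply stands in for whatever the model generated.

```python
# Hypothetical example of vetting a small LLM-generated routine before use.
# Pretend the function below came back from the chat; the checks are mine.

def word_frequencies(text: str) -> dict[str, int]:
    """Count how often each word appears, ignoring case and punctuation."""
    counts: dict[str, int] = {}
    for raw in text.split():
        word = raw.strip(".,;:!?\"'").lower()
        if word:
            counts[word] = counts.get(word, 0) + 1
    return counts

if __name__ == "__main__":
    sample = "The cat sat. The cat slept!"
    result = word_frequencies(sample)
    print(result)
    # Quick sanity checks on the output.
    assert result["the"] == 2
    assert result["cat"] == 2
    assert result["sat"] == 1
```

The checks are trivial, but they enforce the rule above: never trust an LLM’s response until you have vetted it.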
Discussing topics, brainstorming, bouncing ideas off of each other
This is fun, useful, and stimulating. Again, Gemini is like a smart intern: sometimes it can ask the right question or make an interesting statement that gets your thinking going in a new direction. I phrase my questions as though I were talking with a real human, as it helps to stimulate the creative process. My friend also talks with ChatGPT about different composers and their trends and tastes. Really, there is no limit.
As a reference book: encyclopedia, thesaurus, dictionary, etc.
As a talking reference book: If you could talk to your books and have them discuss their content together with you, how amazing would that be? That’s what Gemini is like. I absolutely love that. A talking, “thinking”, smart reference book. This feature alone is a compelling use case.
Summarize content
Sometimes this can be useful, although I typically don’t use it EXCEPT for the AI Blurbs now returned at the start of Google Searches.
Improve your writing, suggest edits and revisions
Gemini can analyze style, edit to match a style, and make decent revisions in most languages. I often use it for definitions and to list major points, as I did at the start of this article. I revised that section, of course, but 90% of it was useful.
Generating images
That is kind of fun, and useful for writing a blog. I have an extensive library of high-quality photographs I have taken, but occasionally, an article just needs a special kind of image to spice it up. Such as the lead photo for this article, generated by Gemini.
Other stuff
I see that Gemini can generate tables and do things in Google Sheets (spreadsheet application), but I haven’t used it for that.
One could use it to write a story, poetry, song lyrics, argue one side or the other in an essay, or even to write a screenplay. None of these are useful to me, so I have not tried them.
Conclusion
It’s hard to imagine working without an assistant — an intern — such as Gemini at my side. I am so much more productive than I was three years ago. Be sure to subscribe and support wonderful content such as this!
¹ Caveat emptor is a Latin phrase that translates to "let the buyer beware". It's a contract law principle that places the responsibility on the buyer to inspect a product or property before making a purchase. The buyer is responsible for ensuring the product is not defective and meets their needs. If the buyer doesn't perform due diligence, they can't seek remedies if the product has defects.