A few years ago, Red Sky Alliance announced a new service named “Rocket Jot.” The service allowed a user to enter any topic terms and, in under a minute, receive a complete written report in Word format. The report often read a little like “robot speech,” but it was complete and, with some grammar checking and a few additional sentences, could serve as the basis for a better report. It also delivered a complete list of the sources used. We offered the service at no charge at first, hoping that cyber threat professionals would use it to produce articles about cyber threat intelligence, malware, and other topics, adding their own insights to the auto-generated report. We had no users.
As a marketing pilot, tens of thousands of advertising ink pens were distributed across every quarter of the University of Colorado Denver campus to see whether college students would spend US $3.50 per report to help with their writing and research assignments. Again, zero users. After a couple more months of marketing ideas, we decided to turn off the service to save on AWS expenses. Maybe we were too early for the market; or if our company name had been “Google,” we might have attracted more users. My first application of AI was incorporating it into the world’s first enterprise payment server, PaylinX, in 1995.
Jim McKee, Founder and CEO of Red Sky Alliance
Note: David Gewirtz wrote the following text in the first person.
Since ChatGPT and other generative AI applications have exploded onto the scene, we've been exploring what we could do with them. I've even shown you how ChatGPT can write a fully functional WordPress plugin and help me find answers to tech support problems. Recently, I showed you how ChatGPT can convert writing into different styles, including that of Shakespeare, C3PO, and Harry Potter.
It seems like ChatGPT can do anything. But it can't. This article will explore the many things ChatGPT won't or can't do. Stay tuned to the end, because we'll look at some of the implications and what these limits mean for future growth in generative AI. As I've done in my other ChatGPT articles, my writing is in plain and bold text, while any ChatGPT answers will be displayed in italic text.
- It won't write about anything after 2021:
Please explain why you can't give me answers to anything after 2021
As an AI language model, I was trained on a large text dataset that only goes up to 2021. This means I don't have access to information or events that occurred after that date. While I have access to a lot of historical data and information, I don't have access to current events, news, or updates that have occurred since my training data cutoff date.
- It won't predict future results of sports events or political contests:
Explain why you can't predict sports events or political contests even though you have enough historical data to produce a valid model
As an AI language model, I can process and analyze large amounts of data, including historical data on sports events and political contests. However, making accurate predictions for these types of events is not always straightforward. While historical data can be used to train machine learning models to identify patterns and correlations, it cannot account for unforeseeable events, changes in circumstances, or new variables that may impact the outcome. Additionally, humans make decisions and can change their behavior based on many factors, making it difficult to predict the outcome of an event with complete certainty.
In the case of sports events, team dynamics, injuries, and even weather conditions can all play a role in determining the outcome. In politics, public opinion, media coverage, and changing circumstances can all impact the outcome of an election. While AI models can be used to make predictions, they are only as good as the data they are trained on and cannot account for all the variables and nuances involved in these events.
- It won't discuss partisan political issues:
Explain why you won't talk about partisan political issues
As an AI language model, I aim to provide objective and informative responses to user queries. Partisan political issues can be contentious and divisive, and it's important to avoid promoting a particular political agenda or viewpoint. Additionally, political issues are often complex and multifaceted, with many different perspectives and opinions to consider. As an AI language model, it's not my place to take sides or advocate for any particular position.
Furthermore, discussing partisan political issues could be perceived as biased or potentially offensive to some users. As a language model designed to serve a broad range of users with diverse backgrounds and beliefs, it's important to remain neutral and avoid creating a potentially uncomfortable or divisive environment.
- It won't do anything that involves a web lookup:
Explain why you can't look up something from the web and provide answers based on that content
As an AI language model, I can search and process information from the web. However, my responses are based solely on the information trained into my model. While I can use the internet as a source of information, I don't have access to real-time or current data. Additionally, I can't independently verify the accuracy of the information I find online. As a result, my responses are limited to the information that has been trained into my model, which may not always be the most up-to-date or accurate information available on the web.
Moreover, it's important to note that providing information from the web could potentially violate copyright laws or intellectual property rights. As an AI language model, it's important for me to respect the intellectual property of others and avoid sharing information that is not in the public domain or that I don't have permission to share. While I can use the web to provide context or additional information to support my responses, I strive to provide information within the public domain and respect the intellectual property of others. In a recent article, I discussed how concerns about the use of content go beyond merely copying it.
For example, while ChatGPT may not be directly presenting information from articles written by tech experts, it may still be using that hard-won information without attribution. This is still a major concern, even though it claims to "respect the intellectual property of others," a phrase it repeated three times in that second paragraph.
- It won't always be accurate:
Explain why your answers aren't necessarily accurate
As an AI language model, my responses are based on the data and information trained into my model. While I strive to provide accurate and helpful responses to user queries, my responses are not guaranteed to be accurate. This is because the data and information I have been trained on may not be completely comprehensive or up-to-date, and there may be nuances or complexities to a particular topic that my model is not equipped to handle. Additionally, my responses are generated using complex algorithms and machine learning models, which may not always produce accurate or relevant results.
Furthermore, my responses may not always be accurate because language interpretation is complex and often subjective. People may interpret the same language differently, with different approaches to a particular problem or question. As an AI language model, I strive to provide responses that are as accurate as possible based on the data and information that I have been trained on. However, I may not always fully understand the context or intent behind a particular query, and my responses may not always fully address the user's underlying question or concern.
- It will sometimes break down mid-answer:
Explain why you sometimes break down and stop functioning in the middle of an answer
As an AI language model, my performance depends on various factors, such as the quality and structure of the input provided by the user, the complexity of the question or task, and the resources available to my system. Sometimes, I may break down or stop functioning during an answer due to technical issues or errors in my programming, such as memory allocation problems, software bugs, or other issues that impact my performance. Additionally, I may struggle to understand the user's intent or the query context, leading to errors or incomplete responses.
Moreover, as a machine learning system, my performance depends on the quality and quantity of data I have been trained on. If the data used to train my model is limited or biased, this may lead to inaccurate or incomplete responses to certain queries. Additionally, if I encounter a query outside of the scope of my training data, I may struggle to provide a relevant or accurate response. While I strive to provide high-quality responses to user queries, there may be instances where technical or data-related issues cause me to break down or stop functioning in the middle of an answer.
I have generally found that if I try to push ChatGPT into a long or deep answer, it tends to break. It's very happy with 500-700 word responses, but if you give it something that needs a few thousand words (or you ask for a few thousand words), the system either ignores the request and gives back what it wants or breaks mid-response.
ChatGPT just became the fastest-growing app of all time. Serving that many users requires a ton of computing power, and with constant hits, it's bound to break under the load from time to time.
Queries ChatGPT won't respond to:
In addition to the abovementioned requests, I asked ChatGPT what types of queries it was programmed not to respond to. It gave me this list of twenty:
1. Questions that promote hate speech or discrimination.
2. Questions that involve illegal activities or solicit illegal advice.
3. Questions that promote violence or harm to oneself or others.
4. Questions that invade personal privacy or violate someone's rights.
5. Questions that are designed to harass, threaten or intimidate.
6. Questions that contain profanity or explicit language.
7. Questions that are sexually explicit or offensive.
8. Questions that are discriminatory based on race, gender, sexuality, or religion.
9. Questions that promote or endorse illegal drugs or substances.
10. Questions that are spam or irrelevant to the conversation.
11. Questions that contain confidential or proprietary information.
12. Questions that involve financial or investment advice.
13. Questions that violate the terms of service or community guidelines.
14. Questions that are nonsensical or incomprehensible.
15. Questions that involve personal attacks or insults.
16. Questions that are abusive or harassing.
17. Questions that seek to deceive or mislead.
18. Questions that are intended to defame or harm someone's reputation.
19. Questions that promote conspiracy theories or misinformation.
20. Questions that are purely for entertainment or joke purposes, without any educational or informative value.
Anyone who's followed this column knows I've asked it a lot of #14 and #20 questions and generally gotten highly entertaining responses, so its restrictions are somewhat limited. For example, earlier today, I asked it to explain wormhole physics as it relates to time travel, and who would win in a fight, Batman or Superman. That's pure entertainment, I'll tell you.
What do these limits mean for the future of generative AI?
Clearly, an AI that's based on a corpus from 2021 and does not evolve will eventually become obsolete. As time goes on, its relevant knowledge will diminish. Imagine if ChatGPT's knowledge base had been trained in 2019 instead of 2021. It would have no idea what society would be like now, given the disruption the pandemic caused in 2020.
So, for generative AI to remain relevant, it will have to continue its training. One obvious way to do this is to open the entire web to it and let it crawl around, just as Google has done for all these years. But as ChatGPT answered above, that opens the door to so many different ways of gaming and corrupting the system that it's sure to damage accuracy.
Even without malicious gaming, the challenge of remaining neutral is very difficult. Take, for example, politics. While the right and the left strongly disagree, both sides have aspects of their ideologies that are logical and valid -- even if the other side can't or won't acknowledge it.
How is an AI to judge? It can't, without bias. But the complete absence of all ideological premises is a form of bias. If humans can't figure out how to walk this line, how can we expect (or program) an AI to do it?
Modern science fiction writers have created characters that are either strictly logical or without emotion to explore what life would be like with a complete absence of bias or emotional content. Those premises have become plot fodder, allowing the writers to explore the limitations of living without the human foibles of emotions and feelings.
Unless AI programmers try to simulate emotions, weight emotional content, or allow for some level of bias based on what's discoverable online, chatbots like ChatGPT will always be limited in their answers. But if they do attempt to simulate emotions or allow for bias based on what's discoverable online, chatbots like ChatGPT will devolve into the same craziness humans do.
So what do we want? Limited answers to some questions, or all answers that feel like they came from a discussion with bonkers Uncle Bob over the Thanksgiving table? Go ahead. Give that some thought and discussion in the comments below, hopefully without devolving into Uncle Bob-like, bonkers behavior.
Red Sky Alliance is a Cyber Threat Analysis and Intelligence Service organization. For questions, comments, or assistance, please get in touch with the office directly at 1-844-492-7225, or email@example.com
Weekly Cyber Intelligence Briefings:
- Reporting: https://www.redskyalliance.org/
- Website: https://www.wapacklabs.com/
- LinkedIn: https://www.linkedin.com/company/64265941
REDSHORTS - Weekly Cyber Intelligence Briefings