openai - X-Industry - Red Sky Alliance2024-03-28T16:41:17Zhttps://redskyalliance.org/xindustry/feed/tag/openaiNation State Hackers Deploying Artificial intelligence (AI)https://redskyalliance.org/xindustry/nation-state-hackers-deploying-artificial-intelligence-ai2024-03-15T16:00:00.000Z2024-03-15T16:00:00.000ZJim McKeehttps://redskyalliance.org/members/JimMcKee<div><p><a href="{{#staticFileLink}}12400254075,RESIZE_400x{{/staticFileLink}}"><img class="align-left" src="{{#staticFileLink}}12400254075,RESIZE_400x{{/staticFileLink}}" alt="12400254075?profile=RESIZE_400x" width="250" /></a>Cyber security is undergoing a massive transformation, with Artificial Intelligence (AI) at the forefront of this change, posing both a threat and an opportunity. AI can potentially empower organizations to defeat cyberattacks at machine speed and drive innovation and efficiency in threat detection, hunting, and incident response. Adversaries can use AI as part of their exploits. It has never been more critical for us to design, deploy, and use AI securely.</p>
<p>See: <a href="https://redskyalliance.org/xindustry/llm-gpt-ai">https://redskyalliance.org/xindustry/llm-gpt-ai</a></p>
<p>Microsoft representatives say the company has detected threats from foreign countries that used or attempted to exploit generative AI it had developed. According to a recent Microsoft Report, state-backed hackers from Russia, China, and Iran have been using tools from Microsoft-backed OpenAI to improve their skills. While computer users of all types have been experimenting with large language models to help with programming tasks, translate phishing emails, and assemble attack plans, the new report is the first to associate top-tier government hacking teams with specific uses of large language model technology. It is also the first report on countermeasures, and it comes amid a continuing debate about the risks of the rapidly developing technology and efforts by many countries to put some limits on its use.</p>
<p>US adversaries, chiefly Iran and North Korea, and to a lesser extent Russia and China, are beginning to use Generative Artificial Intelligence to mount or organize offensive cyber operations. Microsoft recently reported that, in collaboration with business partner OpenAI, its analysts have detected and disrupted many threats that used or attempted to exploit the AI technology the companies had developed.</p>
<p>In a blog post, the company said the techniques were “early-stage” and not “particularly novel or unique,” but that it was important to expose them publicly as US rivals leverage large-language models to expand their ability to breach networks and conduct influence operations.</p>
<p>Cyber security firms have long used machine learning on defense to detect anomalous network behavior. Criminals and offensive hackers use it as well, and the introduction of large-language models, led by OpenAI’s ChatGPT, has upped that game of cat-and-mouse.</p>
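As a minimal sketch of the defensive side, anomaly detection can be as simple as flagging statistical outliers in network telemetry. The connection counts and threshold below are made-up illustrations, not any vendor's actual detector, and real products use far richer features and models:

```python
from statistics import mean, stdev

def flag_anomalies(counts, threshold=2.0):
    """Return indices of values more than `threshold` standard
    deviations from the mean. A toy stand-in for the machine-learning
    detectors defenders run on network telemetry."""
    mu, sigma = mean(counts), stdev(counts)
    return [i for i, c in enumerate(counts) if abs(c - mu) > threshold * sigma]

# Hypothetical outbound-connection counts per hour; hour 5 is a burst.
hourly = [120, 132, 118, 127, 125, 940, 121, 130]
print(flag_anomalies(hourly))  # → [5]
```

A single extreme value inflates the standard deviation it is measured against, which is why the threshold here is 2.0 rather than the textbook 3.0; production systems typically use robust statistics or learned baselines instead.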
<p>Microsoft has invested billions of dollars in OpenAI, and its announcement coincided with the release of a report noting that generative AI is expected to enhance malicious social engineering, leading to more sophisticated deepfakes and voice cloning. This is a threat to democracy in a year when over 50 countries will conduct elections, magnifying the disinformation that is already occurring. Microsoft provided some examples; in each case, it said all generative AI accounts and assets of the named groups were disabled.</p>
<p>The North Korean cyber-espionage group known as Kimsuky has used the models to research foreign think tanks that study the country and to generate content likely to be used in spear-phishing hacking campaigns.</p>
<p>Iran’s Revolutionary Guard has used large-language models to assist in social engineering, troubleshoot software errors, and even study how intruders might evade detection in a compromised network.</p>
<p>That includes generating phishing emails “including one pretending to come from an international development agency and another attempting to lure prominent feminists to an attacker-built website on feminism.” The AI helps accelerate and boost email production.</p>
<p>The Russian GRU military intelligence unit known as Fancy Bear has used the models to research satellite and radar technologies that may relate to the war in Ukraine.</p>
<p>The Chinese cyber-espionage group Aquatic Panda, which targets a broad range of industries, higher education, and governments from France to Malaysia, has interacted with the models “in ways that suggest a limited exploration of how LLMs can augment their technical operations.” The Chinese group Maverick Panda, which has targeted US defense contractors amongst other sectors for more than a decade, had interactions with large-language models suggesting it was evaluating their effectiveness as a source of information “on potentially sensitive topics, high profile individuals, regional geopolitics, US influence, and internal affairs.”</p>
<p>In another blog, OpenAI said its current GPT-4 model chatbot offers “only limited, incremental capabilities for malicious cybersecurity tasks beyond what is already achievable with publicly available, non-AI-powered tools.”</p>
<p>In April 2023, the director of the US Cybersecurity and Infrastructure Security Agency, Jen Easterly, told Congress that “there are two epoch-defining threats and challenges. One is China, and the other is artificial intelligence.” Easterly said the US needed to ensure AI is built with security in mind.</p>
<p>Critics of the public release of ChatGPT in November 2022 have argued it was irresponsibly hasty, considering that security was largely an afterthought in the technology’s development. “Of course, bad actors are using large-language models, and that decision was made when Pandora’s Box was opened,” said Amit Yoran, chief executive of the cyber security firm Tenable.</p>
<p>Some cybersecurity professionals complain about Microsoft’s creation and marketing of tools to address vulnerabilities in large-language models when it might more responsibly focus on making them more secure.</p>
<p>Former AT&T chief security officer Edward Amoroso has observed that while AI and large-language models may not pose an immediately obvious threat, they “will eventually.”</p>
<p> </p>
<p><em>This article is presented at no charge for educational and informational purposes only.</em></p>
<p>Red Sky Alliance is a Cyber Threat Analysis and Intelligence Service organization. For questions, comments, or assistance, please contact the office at 1-844-492-7225 or feedback@redskyalliance.com </p>
<ul>
<li>Reporting: https://www.redskyalliance.org/</li>
<li>Website: https://www.redskyalliance.com/</li>
<li>LinkedIn: https://www.linkedin.com/company/64265941</li>
</ul>
<p>Weekly Cyber Intelligence Briefings:</p>
<p>REDSHORTS - Weekly Cyber Intelligence Briefings</p>
<p><a href="https://attendee.gotowebinar.com/register/5504229295967742989">https://attendee.gotowebinar.com/register/5504229295967742989</a></p></div>ChatGPT Went Berserkhttps://redskyalliance.org/xindustry/chatgpt-went-berserk2024-03-01T14:30:00.000Z2024-03-01T14:30:00.000ZBill Schenkelberghttps://redskyalliance.org/members/BillSchenkelberg<div><p><a href="{{#staticFileLink}}12391512463,RESIZE_584x{{/staticFileLink}}"><img class="align-left" src="{{#staticFileLink}}12391512463,RESIZE_400x{{/staticFileLink}}" width="250" alt="12391512463?profile=RESIZE_400x" /></a>ChatGPT started throwing out “unexpected responses” on the evening of 20 February 2024 according to OpenAI’s status page. Users posted screenshots of their ChatGPT conversations full of wild, nonsensical answers from the AI chatbot. “We are investigating reports of unexpected responses from ChatGPT,” said OpenAI on its status page at 6:40 pm ET that Tuesday night. “We’re continuing to monitor the situation,” the company updated the page at 7:59 pm.<a href="#_ftn1">[1]</a></p>
<p>OpenAI said the issue was resolved as of 11:14 am the following day. “An optimization to the user experience introduced a bug with how the model processes language,” said OpenAI in a status update labeled “Postmortem.” Large Language Models (LLMs) use probabilities to figure out which word comes next in a sentence. OpenAI says this bug was located in the step where the model chooses those probabilities, which ended up producing word sequences that made no sense. “Clearly, something is very wrong with ChatGPT right now,” posted one user on the ChatGPT subreddit. The poster noted responses begin normally, then “devolve into nonsense.”</p>
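That probability-selection step can be illustrated with a toy next-word sampler. The tiny bigram table below is entirely hypothetical (not OpenAI's actual system), but it shows how corrupting just the weights at the sampling step is enough to make output devolve into nonsense while each individual word still looks valid:

```python
import random

random.seed(0)

# Toy bigram "language model": for each word, a distribution over next words.
MODEL = {
    "the": {"cat": 0.6, "dog": 0.3, "runs": 0.1},
    "cat": {"sat": 0.7, "runs": 0.3},
    "dog": {"runs": 0.6, "sat": 0.4},
    "sat": {"quietly": 1.0},
    "runs": {"quickly": 1.0},
    "quietly": {}, "quickly": {},
}

def generate(start, steps, corrupt=False):
    """Sample a word sequence from MODEL. With corrupt=True, the learned
    probabilities are ignored, mimicking a bug in the selection step."""
    words = [start]
    for _ in range(steps):
        dist = MODEL.get(words[-1], {})
        if not dist:
            break
        choices = list(dist)
        if corrupt:
            # Bug: every continuation becomes equally likely, so
            # low-probability word sequences surface as nonsense.
            weights = [1.0] * len(choices)
        else:
            weights = [dist[w] for w in choices]
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the", 4))                # probabilities intact
print(generate("the", 4, corrupt=True))  # probability step broken
```

The corrupted run still emits real words in grammatical-looking order, which matches users' reports of responses that begin normally and then drift into incoherence.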
<p>“Is my GPT having a stroke?” said another user. “The responses are getting progressively more incomprehensible,” followed by a nonsense response from ChatGPT.</p>
<p>See: <a href="https://redskyalliance.org/xindustry/chatgpt-review">https://redskyalliance.org/xindustry/chatgpt-review</a></p>
<p>Many shocking, nonsensical responses from ChatGPT were posted on X and Reddit from that Tuesday night into Wednesday morning. ChatGPT’s responses were some mix of English, Spanish, and straight gibberish. There were also a few emojis thrown in there. In some cases, ChatGPT was simply repeating the same phrase over and over again, until it filled up a user’s screen. The bug even affected ChatGPT Enterprise, according to one user’s post on X. One user posted a video of ChatGPT writing a lengthy, manic essay in response to a simple question.</p>
<p>OpenAI did not immediately respond to reporters’ requests for comment. It was unclear at the time what caused the bug, but it appeared to be widespread and different from typical outages. OpenAI’s status page is typically used to report outages and heavy traffic, so the “unexpected responses” warning was unusual. Most users had no trouble with ChatGPT by Wednesday morning, although some complaints were still flowing in during the early hours. The problem was marked resolved on OpenAI’s status page as of 11:14 am on the 21<sup>st</sup>.</p>
<p>Jim McKee, CEO of Red Sky Alliance Corporation, stated on 23 February 2024, “This is one more reason to learn the skills to do your own research and write your own articles.” Pay attention during English class. </p>
<p>This article is presented at no charge for educational and informational purposes only.</p>
<p>Red Sky Alliance is a Cyber Threat Analysis and Intelligence Service organization. For questions, comments or assistance, please contact the office directly at 1-844-492-7225, or feedback@redskyalliance.com </p>
<ul>
<li>Reporting: https://www.redskyalliance.org/</li>
<li>Website: https://www.redskyalliance.com/</li>
<li>LinkedIn: https://www.linkedin.com/company/64265941</li>
</ul>
<p>Weekly Cyber Intelligence Briefings:</p>
<p>REDSHORTS - Weekly Cyber Intelligence Briefings</p>
<p><a href="https://attendee.gotowebinar.com/register/5504229295967742989">https://attendee.gotowebinar.com/register/5504229295967742989</a></p>
<p><a href="#_ftnref1">[1]</a> <a href="https://gizmodo.com/chatgpt-gone-berserk-giving-nonsensical-responses-1851273889">https://gizmodo.com/chatgpt-gone-berserk-giving-nonsensical-responses-1851273889</a></p></div>Weaponizing AI in Cyber-Attackshttps://redskyalliance.org/xindustry/weaponizing-ai-in-cyber-attacks2024-02-29T17:00:00.000Z2024-02-29T17:00:00.000ZJim McKeehttps://redskyalliance.org/members/JimMcKee<div><p><a href="{{#staticFileLink}}12390146467,RESIZE_400x{{/staticFileLink}}"><img class="align-left" src="{{#staticFileLink}}12390146467,RESIZE_400x{{/staticFileLink}}" width="250" alt="12390146467?profile=RESIZE_400x" /></a>It is no longer theoretical; the world's major powers are working with large language models to enhance offensive cyber operations. Advanced persistent threats (APTs) aligned with China, Iran, North Korea, and Russia use large language models (LLMs) to enhance their operations. New blog posts from OpenAI and Microsoft reveal that five prominent threat actors have used OpenAI software for research, fraud, and other malicious purposes. After identifying them, OpenAI shuttered all their accounts. Though the prospect of AI-enhanced nation-state cyber operations might at first seem daunting, there is good news: none of these LLM abuses observed so far have been particularly devastating. "Current use of LLM technology by threat actors revealed behaviors consistent with attackers using AI as another productivity tool," Microsoft noted in its report. "Microsoft and OpenAI have not yet observed particularly novel or unique AI-enabled attack or abuse techniques resulting from threat actors' usage of AI."<a href="#_ftn1">[1]</a></p>
<p>The nation-state APTs using OpenAI today are among the world's most notorious. Consider the group Microsoft tracks as Forest Blizzard, better known as Fancy Bear. The military unit, which hacked the Democratic National Committee (DNC), has terrorized Ukraine, and is affiliated with the Main Directorate of the General Staff of the Armed Forces of the Russian Federation (GRU), has been using LLMs for basic scripting tasks (file manipulation, data selection, multiprocessing, and so on) as well as for intelligence gathering, researching satellite communication protocols and radar imaging technologies, likely as they pertain to the ongoing war in Ukraine.</p>
<p>See: <a href="https://redskyalliance.org/xindustry/fancy-bear-imposters-us-election">https://redskyalliance.org/xindustry/fancy-bear-imposters-us-election</a></p>
<p>Two Chinese state actors have been ChatGPT-ing lately: Charcoal Typhoon (aka Aquatic Panda, ControlX, RedHotel, BRONZE UNIVERSITY) and Salmon Typhoon (aka APT4, Maverick Panda). The former has been making good use of AI both pre-compromise (gathering information about specific technologies, platforms, and vulnerabilities; generating and refining scripts; and generating social engineering texts in translated languages) and post-compromise (performing advanced commands, achieving deeper system access, and gaining control of systems).</p>
<p>Salmon Typhoon has primarily focused on LLMs as an intelligence tool, sourcing publicly available information about high-profile individuals, intelligence agencies, internal and international politics, and more. It has also largely unsuccessfully attempted to abuse OpenAI to help develop malicious code and research stealth tactics.</p>
<p>Iran's Crimson Sandstorm (Tortoiseshell, Imperial Kitten, Yellow Liderc) is using OpenAI to develop phishing materials (emails pretending to be from an international development agency, for example, or a feminist group) as well as code snippets to aid its operations: web scraping, executing tasks when users sign in to an app, and so on.</p>
<p>See: <a href="https://redskyalliance.org/xindustry/more-bad-kittens">https://redskyalliance.org/xindustry/more-bad-kittens</a></p>
<p>Finally, there is Kim Jong-Un's Emerald Sleet (Kimsuky, Velvet Chollima), which, like the other APTs, turns to OpenAI for basic scripting tasks, phishing content generation, and researching publicly available information on vulnerabilities, as well as on expert think tanks and government organizations concerned with defense issues and North Korea's nuclear weapons program.</p>
<p>See: <a href="https://redskyalliance.org/xindustry/kimsuky-again">https://redskyalliance.org/xindustry/kimsuky-again</a></p>
<p>If these many malicious uses of AI seem useful but not science fiction-level scary, there's a reason why. "Threat actors that are effective enough to be tracked by Microsoft are likely already proficient at writing software," Joseph Thacker, principal AI engineer and security researcher at AppOmni, explains. "Generative AI is amazing, but it's mostly helping humans be more efficient rather than making breakthroughs. I believe those threat actors are using LLMs to write code (like malware) faster, but it's not noticeably impactful because they already have malware. They may be able to be more efficient, but at the end of the day, they aren't doing anything new yet."</p>
<p>Though cautious not to overstate its impact, Thacker warns that AI still offers advantages for attackers. "Bad actors will likely be able to deploy malware at a larger scale or on systems they previously didn't have support for. LLMs are pretty good at translating code from one language or architecture to another. So, I can see them converting their malicious code into new languages they previously weren't proficient in," he says. Further, "if a threat actor found a novel use case, it could still be in stealth and not detected by these companies yet, so it's not impossible. I have seen fully autonomous AI agents that can 'hack' and find real vulnerabilities, so if any bad actors have developed something similar, that would be dangerous." For those reasons, he adds, "Companies can remain vigilant. Keep doing the basics right."</p>
<p> </p>
<p><em>This article is presented at no charge for educational and informational purposes only.</em></p>
<p>Red Sky Alliance is a Cyber Threat Analysis and Intelligence Service organization. For questions, comments, a demo, or assistance, please contact the office directly at 1-844-492-7225 or feedback@redskyalliance.com </p>
<p> </p>
<p>Reporting: <a href="https://www.redskyalliance.org/">https://www.redskyalliance.org/</a></p>
<p>Website: <a href="https://www.redskyalliance.com/">https://www.redskyalliance.com/</a></p>
<p>LinkedIn: <a href="https://www.linkedin.com/company/64265941">https://www.linkedin.com/company/64265941</a></p>
<p><strong>Weekly Cyber Intelligence Briefings:</strong></p>
<p>REDSHORTS - Weekly Cyber Intelligence Briefings</p>
<p><a href="https://attendee.gotowebinar.com/register/5993554863383553632">https://attendee.gotowebinar.com/register/5993554863383553632</a></p>
<p> </p>
<p><a href="#_ftnref1">[1]</a> <a href="https://www.darkreading.com/threat-intelligence/microsoft-openai-nation-states-are-weaponizing-ai-in-cyberattacks">https://www.darkreading.com/threat-intelligence/microsoft-openai-nation-states-are-weaponizing-ai-in-cyberattacks</a></p></div>LLM, GPT & AIhttps://redskyalliance.org/xindustry/llm-gpt-ai2023-06-07T11:50:00.000Z2023-06-07T11:50:00.000ZBill Schenkelberghttps://redskyalliance.org/members/BillSchenkelberg<div><p><a href="{{#staticFileLink}}11421452658,RESIZE_584x{{/staticFileLink}}"><img class="align-left" src="{{#staticFileLink}}11421452658,RESIZE_400x{{/staticFileLink}}" alt="11421452658?profile=RESIZE_400x" width="250" /></a>ChatGPT is a large language model (LLM) falling under the broad definition of generative AI. The sophisticated chatbot was developed by OpenAI using the Generative Pre-trained Transformer (GPT) model to understand and replicate natural language patterns with human-like accuracy. The latest version, GPT-4, exhibits human-level performance on professional and academic benchmarks. Without question, generative AI will create opportunities across all industries, particularly those that depend on large volumes of natural language data.</p>
<p>Generative AI as a security enabler - Enterprise use cases are emerging with the goal of increasing the efficiency of security teams conducting operational tasks. Products such as Microsoft’s Security Co-Pilot draw upon the natural language processing capabilities of generative AI to simplify and automate certain security processes. This will alleviate the resource burden on information security teams, enabling professionals to focus on technically demanding tasks and critical decision making. In the longer term, these products could be key to bridging the industry’s skills gap.</p>
<p>While the benefits are clear, the industry should anticipate that the mainstream adoption of AI is likely to occur at glacial speeds. Research by PA Consulting found that 69% of individuals are afraid of AI and 72% say they don’t know enough about AI to trust it. Overall, this analysis highlights a reluctance to incorporate AI systems into existing processes.</p>
<p>Generative AI as a cyber security threat - In contrast, there are concerns that AI systems like ChatGPT could be used to identify and exploit vulnerabilities, given its ability to automate code completion, code summarization, and bug detection. While concerning, the perception that ChatGPT and similar generative AI tools could be used for malware development is oversimplified.</p>
<p>In its current state, the programming capabilities of generative AI are limited, often producing inaccurate code or ‘hallucinations’ when writing functional programs. Even generative AI tools that are fine-tuned for programming languages show limited programming potential, performing well on easy Python coding interview questions but struggling with more complex problems. And while there are examples of malware developed using generative AI, these programs are written in Python, which is impractical for real-world use. Ultimately, adversaries seeking to develop malware will not gain further advantages from generative AI in comparison to existing tools or techniques. The technology is still in its infancy, but the AI arms race being waged by ‘big-tech’ organizations is likely to result in more powerful and reliable models. Managing this shifting threat landscape requires a proactive and dynamic risk posture.</p>
<p>Organizations should not completely dismiss today’s security threats posed by ChatGPT and other generative AI models. LLMs are extremely effective at imitating human conversation, making it challenging to differentiate generative AI-synthesized text from human discourse. Adversaries could implement generative AI in WhatsApp, SMS, or email to automate conversations with targets, build rapport, and obtain sensitive information. This could be requested directly or gathered by persuading targets to click links to malware. Generative AI may also be used for fraudulent purposes, such as deepfake videos and AI-powered text-to-speech tools for identification spoofing and impersonation.</p>
<p>A proactive approach for organizations - In 2022, human error accounted for 82% of data breaches; with the advent of generative AI tools, this is likely to increase. But while people may be the weakest link, they can also be an organization’s greatest asset.</p>
<p>In response to the changing threat landscape, organizations must ensure their employees are prepared for more convincing, more sophisticated attacks. Leaders must be visible advocates of change, and ensure their people are well-equipped and informed to manage threats. By building psychological safety into their cyber culture, organizations will empower individuals to report security events such as phishing without fear of retribution. This kind of inclusive, transparent cyber culture will be the key differentiator for those with effective cyber security.</p>
<p>Regular corporate communications highlighting emerging threats, case studies, and lessons learned should be supported by regular training that reflects new trends. For example, now that generative AI can write error-free, colloquial prose, it’s no longer possible to identify non-human communication through grammatical errors or robotic sentence structures. By re-evaluating their approach to scam awareness training, organizations should teach employees to verify the identity of recipients before sharing sensitive or personal information.</p>
<p>It is important to keep it simple. The key to a secure culture is implementing straightforward processes and providing accessible training and guidance. Practically, this includes automated nudges to warn colleagues of potentially unsafe actions and HR policies that support a culture of ‘better safe than sorry’.</p>
<p>The way forward - Organizations are staring deeply into the generative AI kaleidoscope, but a watchful eye must be kept on the potential security, privacy, and societal risks posed. They must balance the benefits and threats of introducing AI into their processes and focus on the human oversight and guidelines needed to use it appropriately.</p>
<p>Source: <a href="https://www.computerweekly.com/opinion/Generative-AI-the-next-biggest-cyber-security-threat">Generative AI – the next biggest cyber security threat? | Computer Weekly</a></p>
<p><em>This article is presented at no charge for educational and informational purposes only.</em></p>
<p>Red Sky Alliance is a Cyber Threat Analysis and Intelligence Service organization. For questions, comments, or assistance, please get in touch with the office directly at 1-844-492-7225, or feedback@redskyalliance.com</p>
<ul>
<li>Reporting: <a href="https://www.redskyalliance.org/">https://www.redskyalliance.org/</a></li>
<li>Website: <a href="https://www.redskyalliance.com/">https://www.redskyalliance.com/</a></li>
<li>LinkedIn: <a href="https://www.linkedin.com/company/64265941">https://www.linkedin.com/company/64265941</a></li>
</ul>
<p>Weekly Cyber Intelligence Briefings:</p>
<p>REDSHORTS - Weekly Cyber Intelligence Briefings</p>
<ul>
<li><a href="https://attendee.gotowebinar.com/register/5504229295967742989">https://attendee.gotowebinar.com/register/5504229295967742989</a></li>
</ul></div>When AI is no longer your Friendhttps://redskyalliance.org/xindustry/when-ai-is-no-longer-your-friend2023-03-15T13:00:00.000Z2023-03-15T13:00:00.000ZJim McKeehttps://redskyalliance.org/members/JimMcKee<div><p><a href="{{#staticFileLink}}10997392868,RESIZE_400x{{/staticFileLink}}"><img class="align-left" src="{{#staticFileLink}}10997392868,RESIZE_400x{{/staticFileLink}}" alt="10997392868?profile=RESIZE_400x" width="250" /></a>Most of us have been, or know a friend who has been, the target of an email scammer pretending to be a friend in distress who needs money wired out of town or out of the country. Now scammers are using the telephone to inform you that your loved one is in distress. And the caller may sound “just like” your friend/relative. At that moment, your instinct would be to do anything to help them escape danger, including wiring money. My father was targeted by such a scam, but he called me first for advice. His “friend in trouble” was not in Scotland with a stolen wallet, passport, and a lump on his head; he was at his vacation home in Florida. A quick call to that residence to speak with his friend foiled the scam.</p>
<p>Stop, think, and confirm before you do or commit to doing anything.<a href="#_ftn1">[1]</a></p>
<p>A recent report from The Washington Post featured an elderly couple, Ruth and Greg Card, who fell victim to an impersonation phone call scam. Ruth, 73, got a phone call from a person she thought was her grandson. He told her he was in jail, with no wallet or cell phone, and needed cash fast. As any other concerned grandparent would, Ruth and her husband, 75, rushed to the bank to get the money. It was only at the second bank that a manager warned them: the bank had seen a similar case before that turned out to be a scam, and this one was likely a scam, too.</p>
<p>Such scams are no longer isolated incidents. The report indicates that in 2022, impostor scams were the second most popular racket in America, with over 36,000 people falling victim to scammers impersonating their friends and family. More than 5,100 of those scams happened over the phone, robbing people of over $11 million, according to FTC officials.</p>
<p>Generative AI has been in the media because of the increasing popularity of generative AI programs, such as OpenAI's ChatGPT and DALL-E. These programs have been mostly associated with their advanced capabilities that can increase user productivity. The same techniques used to train those helpful language models can be used to train more harmful programs, such as AI voice generators.</p>
<p>See: <a href="https://redskyalliance.org/xindustry/ai-voice-replication-may-place-you-on-the-couch">https://redskyalliance.org/xindustry/ai-voice-replication-may-place-you-on-the-couch</a></p>
<p>These programs analyze a person's voice for patterns that make up their unique sounds, such as pitch and accent, to recreate it. Many of these tools work within seconds, producing a sound virtually indistinguishable from the original source.</p>
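One of the simplest vocal patterns such tools model is pitch (fundamental frequency). The sketch below is a toy illustration only: it estimates the pitch of a synthetic tone by autocorrelation, whereas real voice-cloning systems extract far richer spectral features from actual speech. The sample rate and tone are arbitrary choices for the demo:

```python
import math

def estimate_pitch(samples, sample_rate, fmin=50.0, fmax=500.0):
    """Estimate fundamental frequency (Hz) via autocorrelation: find the
    lag at which the signal best matches a shifted copy of itself."""
    n = len(samples)
    best_lag, best_score = 0, 0.0
    lo = int(sample_rate / fmax)               # shortest period considered
    hi = min(int(sample_rate / fmin), n - 1)   # longest period considered
    for lag in range(lo, hi + 1):
        score = sum(samples[i] * samples[i + lag] for i in range(n - lag))
        if score > best_score:
            best_lag, best_score = lag, score
    return sample_rate / best_lag if best_lag else 0.0

# Synthetic 220 Hz "voice" tone sampled at 8 kHz.
rate = 8000
tone = [math.sin(2 * math.pi * 220 * t / rate) for t in range(1024)]
print(round(estimate_pitch(tone, rate)))  # close to 220
```

The estimate lands near 220 Hz because the autocorrelation peaks at a lag of one pitch period; cloning systems combine features like this with accent, timing, and timbre models to recreate a specific speaker.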
<p>What can you do to prevent yourself from falling for the scam? The first step is being aware that this type of call is possible. See above: Stop, think, and confirm before doing anything.</p>
<p>If you get a call for help from one of your loved ones, remember that it could be a robot talking instead. To make sure it is actually a loved one, attempt to verify the source. I would hang up the phone immediately. If you are concerned, ask the caller a personal question that only your loved one would know the answer to. This can be as simple as asking them the name of your pet, family member, or other personal facts.</p>
<p>You can also check your loved one's location to see if it matches up with where they say they are. Today, it is common to share your location with friends and family, and in this scenario, it can come in extra handy.</p>
<p>You can also try calling or texting your loved one from another phone to verify the caller's identity. You have your answer if your loved one picks up or texts back and does not know what you are talking about.</p>
<p> </p>
<p>Red Sky Alliance is a Cyber Threat Analysis and Intelligence Service organization. For questions, comments, or assistance, please get in touch with the office directly at 1-844-492-7225, or <a href="mailto:feedback@wapacklabs.com">feedback@wapacklabs.com</a> </p>
<ul>
<li>Reporting: https://www.redskyalliance.org/</li>
<li>Website: https://www.wapacklabs.com/</li>
<li>LinkedIn: https://www.linkedin.com/company/64265941</li>
</ul>
<p>Weekly Cyber Intelligence Briefings:</p>
<p>REDSHORTS - Weekly Cyber Intelligence Briefings</p>
<p><a href="https://attendee.gotowebinar.com/register/5504229295967742989">https://attendee.gotowebinar.com/register/5504229295967742989</a> </p>
<p> </p>
<p><a href="#_ftnref1">[1]</a> <a href="https://www.zdnet.com/article/scammers-are-using-ai-to-impersonate-your-loved-ones-heres-what-to-watch-for/">https://www.zdnet.com/article/scammers-are-using-ai-to-impersonate-your-loved-ones-heres-what-to-watch-for/</a></p></div>The Future is Herehttps://redskyalliance.org/xindustry/the-future-is-here2023-03-08T12:50:00.000Z2023-03-08T12:50:00.000ZBill Schenkelberghttps://redskyalliance.org/members/BillSchenkelberg<div><p><a href="{{#staticFileLink}}10993893888,RESIZE_710x{{/staticFileLink}}"><img class="align-left" src="{{#staticFileLink}}10993893888,RESIZE_400x{{/staticFileLink}}" width="250" alt="10993893888?profile=RESIZE_400x" /></a>Interested in using ChatGPT? It’s all the rage. Information and instructions can be found here: <a href="https://openai.com/pricing">https://openai.com/pricing</a>. You can establish an account and begin using the service. The following is an easy way to learn and understand its capabilities.<a href="#_ftn1">[1]</a></p>
<p>See: <a href="https://redskyalliance.org/xindustry/a-chat-with-chatgpt">https://redskyalliance.org/xindustry/a-chat-with-chatgpt</a></p>
<p>ChatGPT's advanced capabilities have created a huge demand, with the 'app' accumulating over 100 million users within two months of launching. One of the biggest standout features has been its ability to compose all sorts of text within seconds, including songs, poems, bedtime stories, and essays.</p>
<p>Contrary to popular opinion, ChatGPT can do a lot more than just write an essay for you. What is more useful is how it can help guide your writing process. If you are looking for ways to use ChatGPT to support your writing, here are five different ways to explore.</p>
<p><em>How to improve your writing process with ChatGPT </em></p>
<ol>
<li>Use ChatGPT to generate essay ideas - Before you can even get started writing an essay, you need to define the idea. When professors assign essays, they generally give students a prompt that gives them leeway for their own self-expression and analysis. As a result, students have the task of finding the angle to approach the essay on their own.</li>
</ol>
<p>All you need to do is input the assignment topic, include as much detail as you'd like, such as what you are thinking about covering, and let ChatGPT do the rest. For example, based on a paper prompt I had in college, I asked: Can you help me come up with a topic idea for this assignment, "You will write a research paper or case study on a leadership topic of your choice." I would like it to include Blake and Mouton's Managerial Leadership Grid and possibly a historical figure.</p>
<p>Within seconds, the chatbot produced a response that provided me with the title of the essay, options of historical figures I could focus my article on, and insight on what information I could include in my paper, with specific examples of a case study I could use.</p>
<ol start="2">
<li>Use the chatbot to create an outline - Once you have a solid topic, it is time to start brainstorming what you actually want to include in the essay. To begin the writing process, I always create an outline, including all the different points I want to touch upon in my essay. However, the outline writing process is usually tedious.</li>
</ol>
<p>With ChatGPT, all you have to do is ask it to write one for you. Using the topic that ChatGPT helped me generate in step one, I asked the chatbot to write me an outline by saying: Can you create an outline for a paper, "Examining the Leadership Style of Winston Churchill through Blake and Mouton's Managerial Leadership Grid."</p>
<p>After a couple of seconds, the chatbot outputted a holistic outline divided into seven different sections, with three different points under each section. This outline is thorough and can be condensed for a shorter essay, or elaborated on for a longer paper. If you don't like something or want to tweak it further, you can do so either manually or with more instructions to ChatGPT.</p>
<ol start="3">
<li>Use ChatGPT to find sources - Now that you know exactly what you want to write, it's time to find reputable sources to get your information from. If you don't know where to start, like with all of the previous steps, you can just ask ChatGPT. All you need to do is ask it to find sources for your essay topic. For example, I asked it the following: Can you help me find sources for a paper, "Examining the Leadership Style of Winston Churchill through Blake and Mouton's Managerial Leadership Grid."</li>
</ol>
<p>The chatbot output seven sources, with a bullet point for each that explained what the source was and why it could be useful.</p>
<p>The one caveat you will want to be aware of when using ChatGPT for sources is that it does not have access to information after 2021, so it will not be able to suggest the freshest sources. However, it is a start. </p>
<ol start="4">
<li>Use ChatGPT to write a sample essay - It is worth noting that if you take the text directly from the chatbot and submit it, your work could be considered a form of plagiarism, since it is not your original work. As with any information taken from another source, text generated by any AI should be clearly identified and credited in your work.</li>
</ol>
<p>In most educational institutions, the penalties for plagiarism are severe, ranging from a failing grade to expulsion from the school. If you want ChatGPT to generate a sample piece of text, put in the topic and the desired length, and watch what it generates. For example, I input the following text: Can you write a five-paragraph essay on the topic, "Examining the Leadership Style of Winston Churchill through Blake and Mouton's Managerial Leadership Grid."</p>
<p>Within seconds, the chatbot output exactly what I asked for: a coherent, five-paragraph essay on the topic, which can help guide you in your own writing.</p>
<p>At this point it's worth remembering how tools like ChatGPT work: they put words together in a form that they think is statistically valid, but they don't know whether what they are saying is true or accurate. That means you might find invented facts, details, or other oddities. It won't be able to create original work because it is simply aggregating everything it has already absorbed. It might be a useful starting point for your own work, but don't expect it to be inspired or accurate.</p>
<ol start="5">
<li>Use ChatGPT to co-edit your essay - Once you've written your own essay, you can use ChatGPT's advanced writing capabilities to edit it for you. You can simply tell the chatbot what you specifically want it to edit. For example, I asked it to edit for essay structure and grammar, but other options could have included flow, tone, and more.</li>
</ol>
<p>Once you ask it to edit your essay, it will prompt you to paste your text into the chatbot. Once you do, it will output your essay with corrections made. This could be the most useful tool as it can edit your essay more thoroughly than a basic proofreading tool could, going beyond spelling. You could also co-edit with the chatbot, asking it to take a look at a specific paragraph or sentence and asking it to rewrite or fix it for clarity.</p>
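The five steps above can also be sketched as a small script that assembles the prompts in order. This is a minimal illustration, not an official workflow: the `build_prompts` helper and the commented-out `send()` call are hypothetical stand-ins for whatever chat interface or API client you actually use, and the prompt wording simply mirrors the article's examples.

```python
# Sketch of the five-step essay workflow as a sequence of prompts.
# The send() call shown in the comment below is hypothetical.

TOPIC = ("Examining the Leadership Style of Winston Churchill "
         "through Blake and Mouton's Managerial Leadership Grid")

def build_prompts(topic):
    """Return one prompt per step: idea, outline, sources, sample, edit."""
    return [
        f'Can you help me come up with a topic idea for this assignment: "{topic}"?',
        f'Can you create an outline for a paper, "{topic}"?',
        f'Can you help me find sources for a paper, "{topic}"?',
        f'Can you write a five-paragraph essay on the topic, "{topic}"?',
        "Can you edit the following essay for structure and grammar? <paste essay here>",
    ]

if __name__ == "__main__":
    for step, prompt in enumerate(build_prompts(TOPIC), start=1):
        print(f"Step {step}: {prompt}")
        # reply = send(prompt)  # hypothetical: submit the prompt to the chatbot
```

Running the script just prints the five prompts in order; the point is that each step's output (topic, outline, sources) feeds the wording of the next prompt, exactly as in the walkthrough above.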
<p>ChatGPT also has a negative side and can be used for malicious purposes as well as good ones.</p>
<p>See the negative side: <a href="https://redskyalliance.org/xindustry/can-chatgpt-write-malware">https://redskyalliance.org/xindustry/can-chatgpt-write-malware</a></p>
<p>Red Sky Alliance is a Cyber Threat Analysis and Intelligence Service organization. For questions, comments, or assistance, please contact the office directly at 1-844-492-7225, or <a href="mailto:feedback@wapacklabs.com">feedback@wapacklabs.com</a> </p>
<p>Weekly Cyber Intelligence Briefings:</p>
<ul>
<li>Reporting: https://redskyalliance.org/ </li>
<li>Website: https://www.wapacklabs.com/ </li>
<li>LinkedIn: https://www.linkedin.com/company/64265941 </li>
</ul>
<p>REDSHORTS - Weekly Cyber Intelligence Briefings</p>
<p><a href="https://attendee.gotowebinar.com/register/5504229295967742989">https://attendee.gotowebinar.com/register/5504229295967742989</a></p>
<p><a href="#_ftnref1">[1]</a> <a href="https://www.zdnet.com/article/how-to-use-chatgpt-to-help-you-write-essays/">https://www.zdnet.com/article/how-to-use-chatgpt-to-help-you-write-essays/</a></p></div>