AI chatbots: A boon or a bane? NCSC weighs in on cyber risks

AI chatbots are algorithms that can generate human-sounding interactions, also known as large language models (LLMs).

LLMs are seeing early use as chatbots for internet searches, customer service, and sales calls.

British National Cyber Security Centre (NCSC) warns that LLMs could carry cyber risks, especially if integrated into other business processes.

NCSC says that hackers could trick chatbots into performing harmful tasks by feeding them rogue commands or fooling them into bypassing their safeguards.

NCSC cites examples of LLMs such as OpenAI’s ChatGPT, which can generate realistic text and images, and ChatGPT and other technologies that have potential for good and ill.

 NCSC advises organisations to be careful when using LLMs, as they are still in beta stage and not fully understood or tested.

NCSC suggests some possible ways to regulate LLMs, such as creating ethical standards, ensuring transparency and accountability, and fostering public awareness and education.

 NCSC urges the government and the industry to work together to ensure the safety and benefits of LLMs.