Google’s parent company, Alphabet Inc. (GOOGL), has cautioned its employees about using chatbots, including its own Bard, over data confidentiality and security concerns. Even as Google markets the chatbot worldwide, it has advised employees not to enter confidential information into AI chatbots, since data absorbed during training could later be reproduced or leaked. Alphabet has also warned its engineers against directly using computer code generated by chatbots.
Why it matters: This cautionary approach reflects Google’s desire to avoid business harm as it competes with OpenAI’s ChatGPT, which is backed by Microsoft. It also aligns with a security standard increasingly adopted by corporations: warning personnel about the use of publicly available chat programs. The concerns underscore the need to safeguard sensitive information and adhere to privacy standards when using AI chatbots.
- As chatbots become more sophisticated, there is a risk that sensitive data could be leaked or accessed by unauthorized individuals, potentially leading to reputational and financial damage for businesses.
- Many companies, including industry leaders like Google, are implementing guardrails and cautioning employees about publicly available chat programs. This reflects a growing recognition that data security and privacy must come first, and that employees need to understand the risks these technologies carry.
- The cautionary approach adopted by Google also highlights the intense competition in the AI market, particularly in the realm of chatbots.