The administration of President Joe Biden has taken significant steps to regulate the use of artificial intelligence, a rapidly expanding field that is transforming both the global economy and daily life. To mitigate the risks associated with this technology, Biden signed an executive order in October 2023 establishing a series of safeguards intended to ensure the safe and equitable development of AI. Apple, one of the world's leading technology companies, has agreed to follow these voluntary commitments, joining other industry giants such as Alphabet, Amazon, Meta, Microsoft, and OpenAI.

The Safeguards

Biden's executive order responds to growing concerns about the potential dangers of artificial intelligence, particularly regarding privacy and equity. The regulation rests on eight key points, ranging from promoting AI safety to protecting civil rights. One of its most notable provisions requires developers of the most advanced AI systems to share their safety test results and other critical information with the U.S. government and civil society.

Apple, which recently launched its suite of AI functions called Apple Intelligence, has decided to comply with these new standards. The company's decision comes shortly after its agreement with OpenAI to integrate AI models into its products, underscoring its commitment to responsible technology development.

Challenges and the Future

Although Biden's executive order establishes a detailed framework for regulating artificial intelligence, compliance with these safeguards is not mandatory. Companies are free to adopt the standards voluntarily, meaning the U.S. government must rely on their good faith. Even so, the adherence of tech giants like Apple signals a recognition that regulation is needed in this area.

Beyond these points, the regulation includes measures to guard against the misuse of AI in creating dangerous biological materials and in enabling fraud and misinformation. It also establishes standards for detecting and authenticating AI-generated content, helping ensure that systems are safe and reliable. One of the most critical aspects of the executive order is the protection of user privacy: the Biden administration acknowledges that artificial intelligence makes it easier to extract and exploit personal information, and the order includes guidelines to accelerate the development of privacy-preserving techniques and bolster research in this area.

The regulation of artificial intelligence is an ever-evolving field, and Biden's executive order represents an important first step towards a more robust and secure framework for its development. With the adherence of Apple and other major tech companies, it is expected that more industry players will join this initiative, promoting responsible and safe innovation in AI use.