States like Washington are creating policies for the use of generative AI

Government bodies are increasingly developing standards for the use of generative AI while weighing the ethical, legal, and cybersecurity risks that accompany the technology.

Government organizations are increasingly creating guidelines for using generative artificial intelligence while weighing the legal, ethical, and cybersecurity risks that surround the technology. (Photo: LinkedIn)

Interim rules published for the use of generative artificial intelligence

According to The Center Square, the Washington State Office of the Chief Information Officer this week published interim guidelines for the appropriate and purposeful use of generative artificial intelligence in state government. The office, part of the governor's executive cabinet, sets the state's information technology strategy and direction; it adopted the AI standards on August 8.

The information technology office noted that the rapid development of AI could change how state employees carry out their work, reshaping government business processes and ultimately improving government efficiency.

The policy directs state agencies and personnel to promote public trust, support business objectives, and ensure the ethical, transparent, accountable, and responsible use of this technology. At the same time, AI raises new and demanding challenges.

When prompted by a user, generative AI can produce text, images, audio, and video that would typically require human intelligence. Systems such as ChatGPT, Google AI, Microsoft Azure, and IBM Watson scan massive volumes of internet data to identify patterns and relationships, then generate new material that resembles, but does not exactly reproduce, the original data. Search engines and other web tools already make use of the technology.

AI has drawn interest from a wide range of people, from teachers questioning whether student assignments are original to members of Congress funding future research and weighing potential regulation.

A week ago, U.S. Sen. Patty Murray, a Democrat from Washington, stopped at the University of Washington's computer science and engineering school to speak with researchers about the advancement of AI.


Funding to launch AI Institute

In a press release, Murray, who was instrumental in securing $20 million in federal funding to launch the National Science Foundation AI Institute for Dynamic Systems at UW, said that artificial intelligence presents enormous opportunities as well as significant challenges and risks.

Cryptopolitan reported that Washington State's new policy will follow the guidelines outlined in the National Institute of Standards and Technology's AI Risk Management Framework.

The City of Seattle was one of many governmental bodies throughout the nation that started putting interim AI policies into effect this spring.

Seattle's interim policy remains in effect until October 31 while municipal authorities continue to examine the implications of using generative AI in government.

Among other things, AI can draft communications for employees, conduct research, compile material, and write software code. However, it raises legal concerns about accuracy, the creation of offensive or discriminatory content, the potential disclosure of contractual or collective bargaining matters, and susceptibility to data breaches.

