{"id":65519,"date":"2024-07-17T06:00:00","date_gmt":"2024-07-17T10:00:00","guid":{"rendered":"https:\/\/statescoop.com\/?p=65519"},"modified":"2024-07-18T16:04:43","modified_gmt":"2024-07-18T20:04:43","slug":"ai-government-policy-2024","status":"publish","type":"post","link":"https:\/\/statescoop.com\/ai-government-policy-2024\/","title":{"rendered":"AI is beyond government control"},"content":{"rendered":"\n<p>State governments are moving fast to protect their agencies and publics from the potential harms that could be caused by generative artificial intelligence, but, like the internet itself, the rapidly evolving technology is demonstrating a reach and influence beyond any single organization\u2019s control.<\/p>\n\n\n\n<p>Since ChatGPT opened to the public in November 2022, states have birthed hundreds of new bills, executive orders, committees, task forces and policies aimed at preempting the many potential harms that generative AI might exact on their communities, workforces and private data stores. State government\u2019s leading technology officials have, loudly and often, made known their concerns about generative AI\u2019s potential to amplify biases, spread misinformation and disrupt work and personal life.<\/p>\n\n\n\n<p>The validity of those concerns is borne out each day as AI\u2019s presence is felt on social media, in the software ecosystem and in the physical world. Leaders from Google and Microsoft\u2019s public sector businesses told StateScoop they share government\u2019s concerns and pointed to ethics policies, crafted over years of careful deliberation, as strategic bulwarks against potential misuses of AI. Representatives from both companies earnestly espoused an interest in aligning their ethical goals with government\u2019s.<\/p>\n\n\n\n<p>But sometimes even the best intentions can be knocked off track. 
Rebecca Williams, data governance program manager with the American Civil Liberties Union, pointed to the frequency with which government hires companies that wind up engaging in questionable practices, citing the data-mining company <a href=\"https:\/\/fedscoop.com\/arpah-enters-data-ai-contract-with-palantir\/\">Palantir<\/a> and the identity management firm <a href=\"https:\/\/statescoop.com\/massachusetts-idme-identity-facial-recognition\/\">ID.me<\/a>.<\/p>\n\n\n\n<p>\u201cGovernment has strict guidelines that are more conservative than what the private sector has, but then they procure technologies that don\u2019t necessarily follow government guidelines,\u201d she said. \u201cI think they\u2019re talking a great talk, but I think they\u2019re not just sort of dependent, but heavily dependent on the vendors.\u201d<\/p>\n\n\n\n<p>After this story was originally published, ID.me contacted StateScoop to point out that it complies with federal standards for identity authentication, including those set by the Department of Commerce and the National Institute of Standards and Technology.<\/p>\n\n\n\n<p>&#8220;Our strict adherence to Federal data handling guidelines qualifies us as compliant with NIST requirements, as well as FedRAMP Moderate, amongst other security certifications,&#8221; an ID.me spokesperson wrote in an email.<\/p>\n\n\n\n<p>Williams also pointed to government\u2019s tendency to make decisions based on its \u201cmodel of austerity,\u201d a phenomenon captured by the rallying cry heard throughout the public sector that it must \u201cdo more with less.\u201d Generative AI, with its human-like output and superhuman speed, promises to save agencies untold hours of costly tedium. 
And while the IT officials StateScoop interviewed for this story uniformly expressed caution about using generative AI, there may come a time when its value proposition becomes irresistible, or a day when it&#8217;s so pervasive it simply can&#8217;t be avoided.<\/p>\n\n\n\n<p>And even more to the point in 2024, government agencies don\u2019t need to procure AI for it to infiltrate their walls. Digital assistants like OpenAI\u2019s ChatGPT or Anthropic\u2019s Claude can be freely accessed online, and, by all accounts, government employees frequently use those tools. An even trickier challenge for officials tasked with governing AI is the growing tendency of software companies to plop new AI-powered functions into software already being used by tens of thousands of government employees.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-owning-the-process\">Owning the process<\/h3>\n\n\n\n<p>Each state is managing the AI environment&#8217;s caprice a bit differently. Josiah Raiche, Vermont\u2019s chief data and AI officer, said state leaders created his role, along with a state AI council, because they\u2019re taking seriously the potential for AI misuse to breach the public\u2019s trust.<\/p>\n\n\n\n<p>\u201cIt is important to have somebody senior in the technology organization who really does feel that it\u2019s their job to focus on ethics,\u201d he said. \u201cI think it\u2019s worth it to have a director of AI just for that.\u201d<\/p>\n\n\n\n<p>It\u2019s government\u2019s responsibility, Raiche said, to establish policies that ensure its own AI use is ethical.<\/p>\n\n\n\n<p>But Raiche also argued that including AI in a project to redesign a clumsy paper process, for example, isn\u2019t necessarily meaningfully different from offloading work to a human intern. 
In either case, government must measure its outcomes and take care to be ethical.<\/p>\n\n\n\n<p>\u201cIt\u2019s still got to be the owner of the program who owns the process who also owns the technical tool that\u2019s used in that,\u201d he said. \u201cIn Vermont, we\u2019ve said very broadly it\u2019s generally OK to use AI for personal productivity things for your staff, but that\u2019s at the discretion of the supervisor based on where they think there\u2019s risk to damaging trust in Vermont\u2019s institutions or some other type of risk.\u201d<\/p>\n\n\n\n<p>Most AI challenges are arriving at government\u2019s doorstep unbidden. One state official shared with StateScoop an email they received from the graphic design platform Canva, which noted that hundreds of registered user accounts from their organization that were using the tool were \u201cat risk of exposing the state&#8217;s intellectual property as we begin training our AI with user content.\u201d The proposed solution: Consolidate the accounts and regain control of the state&#8217;s data through the purchase of an enterprise license.<\/p>\n\n\n\n<p>In an email, a Canva spokesperson told StateScoop this may have been a miscommunication or misunderstanding.<\/p>\n\n\n\n<p>\u201cBy default, all users are opted out of AI training, and we will never train on a user\u2019s private content without their permission,\u201d the company\u2019s statement read. \u201cWhen it comes to AI, we\u2019ve taken a careful and considered approach while continuing to invest heavily in trust and safety through Canva Shield, our industry-leading collection of trust, safety, and privacy tools. 
\u2026 Enterprise account admins can control whether data from users on their team can be used to train AI models or not, rather than leaving it up to the individual user.\u201d<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-it-s-not-happening\">&#8216;It&#8217;s not happening&#8217;<\/h3>\n\n\n\n<p>Utah Chief Technology Officer Chris Williamson said his biggest AI challenge is cleaning the state\u2019s datasets, which were never maintained for use in AI models. But he said the potential uses of state government&#8217;s \u201cwealth of data\u201d are endless. One theoretical example: correlating tax records with driver\u2019s license data to understand road congestion.<\/p>\n\n\n\n<p>\u201cI can now make a map of what a commuting network looks like for our environment within the state of Utah and I know what theoretical road congestion is going to look like,\u201d he said of the imagined project. \u201cThat\u2019s going to give me an idea of what I have to do for road infrastructure, architecture, if I have to do major road maintenance. What\u2019s going to happen when those individuals now have to take new arteries to get to work? And I could either plan or re-architect my road structure to manage those individuals, just by knowing where they live and theoretically where they\u2019re working.\u201d<\/p>\n\n\n\n<p>Beyond the technical challenges of undertaking such a project, Williamson said, it\u2019s not up to him to decide whether a project constitutes an ethical use of AI \u2014 that\u2019s the job of the state\u2019s lawmakers.<\/p>\n\n\n\n<p>\u201cIt\u2019s been put in code at the legislative level, but it\u2019s also been put in code at the computer level. And we built our systems around protecting that citizen data with some very strict barriers,\u201d he said.<\/p>\n\n\n\n<p>Massachusetts State Sen. Barry Finegold is among the legislators who\u2019ve <a href=\"https:\/\/malegislature.gov\/Bills\/193\/SD1827\">drafted bills<\/a> aimed at reining in AI. 
Finegold <a href=\"https:\/\/www.masslive.com\/politics\/2023\/01\/mass-lawmaker-uses-chatgpt-to-help-write-legislation-limiting-the-program.html\">garnered media attention<\/a> by using ChatGPT to help him write the text of his AI bills, which <a href=\"https:\/\/malegislature.gov\/Bills\/193\/SD2932\">include one<\/a> that would impose fines on political candidates who use deepfakes to deceive the public.<\/p>\n\n\n\n<p>\u201cFirst of all, this should be done on the federal level,\u201d Finegold said. \u201cI\u2019ll be the first to admit that, but it\u2019s not happening.\u201d<\/p>\n\n\n\n<p>In the absence of comprehensive federal AI legislation, states have been proactively governing generative AI with a gusto unseen in some previous technology revolutions. <a href=\"https:\/\/www.ncsl.org\/technology-and-communication\/artificial-intelligence-2024-legislation\">According to<\/a> the National Conference of State Legislatures, at least 40 states introduced AI bills during their 2024 legislative sessions.<\/p>\n\n\n\n<p>\u201cOnce upon a time, we thought Facebook was really cute,\u201d Finegold said. \u201cIt was like college kids, and we saw how powerful it was. And we should have put up guardrails like we have in place now. \u2026 I feel this time around with AI, we\u2019re better. But I\u2019m still concerned that AI is moving so quickly that even with our best efforts we\u2019re going to miss things.\u201d<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-bigger-than-all-of-us\">&#8216;Bigger than all of us&#8217;<\/h3>\n\n\n\n<p>Would Indiana Chief Data Officer Josh Martin strap AI on top of all of his state\u2019s data?<\/p>\n\n\n\n<p>\u201cAbsolutely not,\u201d he said. \u201cIt&#8217;s just not in a place where we understand where it all lives, who&#8217;s in charge of it, what the quality is of it, what&#8217;s valuable, what&#8217;s not. \u2026 Most of the metadata in these systems wasn\u2019t fully completed when they were developed. 
It just wasn&#8217;t a priority at the time.\u201d<\/p>\n\n\n\n<p>Helping states figure out how to get their data ready for AI, while keeping operations secure and ethical, is Keith Bauer\u2019s job. As managing director of Microsoft Public Sector, he said one of the most common questions he hears these days is: \u201cHow do we make sure we\u2019re using AI responsibly?\u201d<\/p>\n\n\n\n<p>\u201cIt\u2019s not something that Microsoft can solve, government can solve, users can solve, the public can solve,\u201d he said. \u201cIt really, truly is a collective effort by everybody.\u201d<\/p>\n\n\n\n<p>He pointed to Microsoft\u2019s <a href=\"https:\/\/cdn-dynmedia-1.microsoft.com\/is\/content\/microsoftcorp\/microsoft\/final\/en-us\/microsoft-brand\/documents\/Microsoft-Responsible-AI-Standard-General-Requirements.pdf?culture=en-gb&amp;country=gb\">Responsible AI Standard<\/a>, the policies it uses internally to ensure AI\u2019s accountability, transparency, fairness, reliability, safety, privacy, security and inclusivity. He pointed to Microsoft\u2019s <a href=\"https:\/\/cyberscoop.com\/white-house-ai-executive-order-cybersecurity\/\">ready public support<\/a> for the Biden administration\u2019s AI executive order. 
And he repeatedly declared his company\u2019s interest in being \u201cin lockstep with our government customers on their AI journey.\u201d<\/p>\n\n\n\n<p>\u201cWe don\u2019t train the underlying models when government customers use our technologies, and if a customer were to build their own generative AI solution, some of the things to take into consideration with that is the options they have for getting the results that they want in that generative AI solution,\u201d he said.<\/p>\n\n\n\n<p>Google maintains a similar list of <a href=\"https:\/\/ai.google\/responsibility\/principles\/\">AI principles<\/a> that includes guidance to \u201cbe socially beneficial\u201d and to \u201cuphold high standards of scientific excellence.\u201d<\/p>\n\n\n\n<p>Chris Hein, director of customer engineering for Google\u2019s public sector business, emphasized that the public sector business is a corporation separate from Google, allowing it to align more closely with White House AI policies and the public sector at large. He gestured at comments made by Thomas Kurian, chief executive of Google Cloud, indicating an interest in building technologies explicitly with government in mind. And he pointed out that \u201ca vast majority\u201d of the company\u2019s commercial products are compliant with the FedRAMP government security standard.<\/p>\n\n\n\n<p>\u201cYou as a government agency, you can\u2019t necessarily do anything about \u2026 the training, about the different weighting and all those different kinds of things that are happening in the background of a large language model,\u201d he said. 
\u201cSo when you come to a vendor like Google, you\u2019re trusting that vendor to have a certain amount of ethical responsibility in the training and in the weighting of those models and how they\u2019ve been developed over time.\u201d<\/p>\n\n\n\n<p>Though government must outsource some of its ethical work if it wants to do AI, Hein said he thinks government can rest easy, because large companies developing the technology, like Google, share the same values.<\/p>\n\n\n\n<p>He also said most government agencies aren\u2019t interested in tuning their own models and prefer for things to work \u201cout of the box.\u201d<\/p>\n\n\n\n<p>\u201cWhen you\u2019re using technology, we have this \u2018shared fate\u2019 kind of model when we think of things like security,\u201d Hein said. \u201cGoogle is going to be responsible, as a technology provider, for ensuring that there\u2019s certain aspects of the system that you should not have to worry about as someone who is utilizing that cloud environment.\u201d<\/p>\n\n\n\n<p>Despite the assurances of Big Tech\u2019s public sector businesses, Delaware Chief Information Officer Gregory Lane said he doesn\u2019t believe the preferences of state government will have much influence on the future of AI technologies.<\/p>\n\n\n\n<p>\u201cIt\u2019s bigger than all of us, it\u2019s happening around us,\u201d Lane said. \u201cWe\u2019re not going to control it. I just got off a meeting where it was suggested we have a list of [AI] tools people can use, and my comment was that\u2019s like having a list of apps that are OK for your iPhone. That thing\u2019s going to grow and spread faster than you can keep up with it.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"<p>State governments are forming new laws and policies that set &#8220;guardrails&#8221; to limit AI&#8217;s potential harms. 
But officials know their reach is limited.<\/p>\n","protected":false},"author":205,"featured_media":65618,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"disable_grayscale_images":true,"grayscale_contrast":0,"sponsored_content":false,"display_author_bio":true,"story_type":"","footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[4677,24394],"tags":[134,602,749,3973,5272,24150,24697,24938,24939,24940],"people":[],"special-report":[24959],"authors":[4696],"class_list":["post-65519","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-state","category-ai","tag-government","tag-microsoft","tag-google","tag-state-government","tag-artificial-intelligence-ai","tag-generative-ai","tag-ai-policy","tag-ai-laws","tag-canva","tag-ai-training-data","special-report-government-ai-2024","author-colin-wood"],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v24.5 (Yoast SEO v24.5) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>AI is beyond government control | StateScoop<\/title>\n<meta name=\"description\" content=\"State governments are forming new laws and policies that set &quot;guardrails&quot; to limit AI&#039;s potential harms. 
But officials know their reach is limited.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/statescoop.com\/ai-government-policy-2024\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"AI is beyond government control | StateScoop\" \/>\n<meta property=\"og:description\" content=\"State governments are forming new laws and policies that set &quot;guardrails&quot; to limit AI&#039;s potential harms. But officials know their reach is limited.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/statescoop.com\/ai-government-policy-2024\/\" \/>\n<meta property=\"og:site_name\" content=\"StateScoop\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/StateScoop\/\" \/>\n<meta property=\"article:published_time\" content=\"2024-07-17T10:00:00+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2024-07-18T20:04:43+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/statescoop.com\/wp-content\/uploads\/sites\/6\/2024\/07\/AI-report-ethics.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1300\" \/>\n\t<meta property=\"og:image:height\" content=\"731\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Colin Wood\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@State_Scoop\" \/>\n<meta name=\"twitter:site\" content=\"@State_Scoop\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/statescoop.com\/ai-government-policy-2024\/\",\"url\":\"https:\/\/statescoop.com\/ai-government-policy-2024\/\",\"name\":\"AI is beyond government control | 
StateScoop\",\"isPartOf\":{\"@id\":\"https:\/\/statescoop.com\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/statescoop.com\/ai-government-policy-2024\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/statescoop.com\/ai-government-policy-2024\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/statescoop.com\/wp-content\/uploads\/sites\/6\/2024\/07\/AI-report-ethics.jpg\",\"datePublished\":\"2024-07-17T10:00:00+00:00\",\"dateModified\":\"2024-07-18T20:04:43+00:00\",\"description\":\"State governments are forming new laws and policies that set \\\"guardrails\\\" to limit AI's potential harms. But officials know their reach is limited.\",\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/statescoop.com\/ai-government-policy-2024\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/statescoop.com\/ai-government-policy-2024\/#primaryimage\",\"url\":\"https:\/\/statescoop.com\/wp-content\/uploads\/sites\/6\/2024\/07\/AI-report-ethics.jpg\",\"contentUrl\":\"https:\/\/statescoop.com\/wp-content\/uploads\/sites\/6\/2024\/07\/AI-report-ethics.jpg\",\"width\":1300,\"height\":731,\"caption\":\"(Giannina Vera \/ Scoop News Group)\"},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/statescoop.com\/#website\",\"url\":\"https:\/\/statescoop.com\/\",\"name\":\"StateScoop\",\"description\":\"Latest news and events in state and local government 
technology\",\"publisher\":{\"@id\":\"https:\/\/statescoop.com\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/statescoop.com\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/statescoop.com\/#organization\",\"name\":\"StateScoop\",\"url\":\"https:\/\/statescoop.com\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/statescoop.com\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/statescoop.com\/wp-content\/uploads\/sites\/6\/2023\/01\/StateScoop-Black.png\",\"contentUrl\":\"https:\/\/statescoop.com\/wp-content\/uploads\/sites\/6\/2023\/01\/StateScoop-Black.png\",\"width\":1470,\"height\":186,\"caption\":\"StateScoop\"},\"image\":{\"@id\":\"https:\/\/statescoop.com\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.facebook.com\/StateScoop\/\",\"https:\/\/x.com\/State_Scoop\",\"https:\/\/www.linkedin.com\/company\/statescoop\/\",\"https:\/\/www.youtube.com\/@StateScoop\"]}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"AI is beyond government control | StateScoop","description":"State governments are forming new laws and policies that set \"guardrails\" to limit AI's potential harms. But officials know their reach is limited.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/statescoop.com\/ai-government-policy-2024\/","og_locale":"en_US","og_type":"article","og_title":"AI is beyond government control | StateScoop","og_description":"State governments are forming new laws and policies that set \"guardrails\" to limit AI's potential harms. 
But officials know their reach is limited.","og_url":"https:\/\/statescoop.com\/ai-government-policy-2024\/","og_site_name":"StateScoop","article_publisher":"https:\/\/www.facebook.com\/StateScoop\/","article_published_time":"2024-07-17T10:00:00+00:00","article_modified_time":"2024-07-18T20:04:43+00:00","og_image":[{"width":1300,"height":731,"url":"https:\/\/statescoop.com\/wp-content\/uploads\/sites\/6\/2024\/07\/AI-report-ethics.jpg","type":"image\/jpeg"}],"author":"Colin Wood","twitter_card":"summary_large_image","twitter_creator":"@State_Scoop","twitter_site":"@State_Scoop","schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/statescoop.com\/ai-government-policy-2024\/","url":"https:\/\/statescoop.com\/ai-government-policy-2024\/","name":"AI is beyond government control | StateScoop","isPartOf":{"@id":"https:\/\/statescoop.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/statescoop.com\/ai-government-policy-2024\/#primaryimage"},"image":{"@id":"https:\/\/statescoop.com\/ai-government-policy-2024\/#primaryimage"},"thumbnailUrl":"https:\/\/statescoop.com\/wp-content\/uploads\/sites\/6\/2024\/07\/AI-report-ethics.jpg","datePublished":"2024-07-17T10:00:00+00:00","dateModified":"2024-07-18T20:04:43+00:00","description":"State governments are forming new laws and policies that set \"guardrails\" to limit AI's potential harms. 
But officials know their reach is limited.","inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/statescoop.com\/ai-government-policy-2024\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/statescoop.com\/ai-government-policy-2024\/#primaryimage","url":"https:\/\/statescoop.com\/wp-content\/uploads\/sites\/6\/2024\/07\/AI-report-ethics.jpg","contentUrl":"https:\/\/statescoop.com\/wp-content\/uploads\/sites\/6\/2024\/07\/AI-report-ethics.jpg","width":1300,"height":731,"caption":"(Giannina Vera \/ Scoop News Group)"},{"@type":"WebSite","@id":"https:\/\/statescoop.com\/#website","url":"https:\/\/statescoop.com\/","name":"StateScoop","description":"Latest news and events in state and local government technology","publisher":{"@id":"https:\/\/statescoop.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/statescoop.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/statescoop.com\/#organization","name":"StateScoop","url":"https:\/\/statescoop.com\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/statescoop.com\/#\/schema\/logo\/image\/","url":"https:\/\/statescoop.com\/wp-content\/uploads\/sites\/6\/2023\/01\/StateScoop-Black.png","contentUrl":"https:\/\/statescoop.com\/wp-content\/uploads\/sites\/6\/2023\/01\/StateScoop-Black.png","width":1470,"height":186,"caption":"StateScoop"},"image":{"@id":"https:\/\/statescoop.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/StateScoop\/","https:\/\/x.com\/State_Scoop","https:\/\/www.linkedin.com\/company\/statescoop\/","https:\/\/www.youtube.com\/@StateScoop"]}]}},"parsely":{"version":"1.1.0","canonical_url":"https:\/\/statescoop.com\/ai-government-policy-2024\/","smart_links":{"inbound":0,"outbound":0},"traffic_boost_sugges
tions_count":0,"meta":{"@context":"https:\/\/schema.org","@type":"NewsArticle","headline":"AI is beyond government control","url":"http:\/\/statescoop.com\/ai-government-policy-2024\/","mainEntityOfPage":{"@type":"WebPage","@id":"http:\/\/statescoop.com\/ai-government-policy-2024\/"},"thumbnailUrl":"https:\/\/statescoop.com\/wp-content\/uploads\/sites\/6\/2024\/07\/AI-report-ethics.jpg?w=150&h=150&crop=1","image":{"@type":"ImageObject","url":"https:\/\/statescoop.com\/wp-content\/uploads\/sites\/6\/2024\/07\/AI-report-ethics.jpg"},"articleSection":"State","author":[{"@type":"Person","name":"Colin Wood","url":"https:\/\/statescoop.com\/author\/colin-wood\/"}],"creator":["Colin Wood"],"publisher":{"@type":"Organization","name":"StateScoop","logo":"https:\/\/statescoop.com\/wp-content\/uploads\/sites\/6\/2023\/01\/cropped-ss_favicon.png"},"keywords":["ai laws","ai policy","ai training data","artificial intelligence (ai)","canva","generative ai","google","government","microsoft","state government"],"dateCreated":"2024-07-17T10:00:00Z","datePublished":"2024-07-17T10:00:00Z","dateModified":"2024-07-18T20:04:43Z"},"rendered":"<script type=\"application\/ld+json\" class=\"wp-parsely-metadata\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@type\":\"NewsArticle\",\"headline\":\"AI is beyond government control\",\"url\":\"http:\\\/\\\/statescoop.com\\\/ai-government-policy-2024\\\/\",\"mainEntityOfPage\":{\"@type\":\"WebPage\",\"@id\":\"http:\\\/\\\/statescoop.com\\\/ai-government-policy-2024\\\/\"},\"thumbnailUrl\":\"https:\\\/\\\/statescoop.com\\\/wp-content\\\/uploads\\\/sites\\\/6\\\/2024\\\/07\\\/AI-report-ethics.jpg?w=150&h=150&crop=1\",\"image\":{\"@type\":\"ImageObject\",\"url\":\"https:\\\/\\\/statescoop.com\\\/wp-content\\\/uploads\\\/sites\\\/6\\\/2024\\\/07\\\/AI-report-ethics.jpg\"},\"articleSection\":\"State\",\"author\":[{\"@type\":\"Person\",\"name\":\"Colin Wood\",\"url\":\"https:\\\/\\\/statescoop.com\\\/author\\\/colin-wood\\\/\"}],\"creator\":[\"Colin 
Wood\"],\"publisher\":{\"@type\":\"Organization\",\"name\":\"StateScoop\",\"logo\":\"https:\\\/\\\/statescoop.com\\\/wp-content\\\/uploads\\\/sites\\\/6\\\/2023\\\/01\\\/cropped-ss_favicon.png\"},\"keywords\":[\"ai laws\",\"ai policy\",\"ai training data\",\"artificial intelligence (ai)\",\"canva\",\"generative ai\",\"google\",\"government\",\"microsoft\",\"state government\"],\"dateCreated\":\"2024-07-17T10:00:00Z\",\"datePublished\":\"2024-07-17T10:00:00Z\",\"dateModified\":\"2024-07-18T20:04:43Z\"}<\/script>","tracker_url":"https:\/\/cdn.parsely.com\/keys\/statescoop.com\/p.js"},"jetpack_publicize_connections":[],"jetpack_featured_media_url":"https:\/\/statescoop.com\/wp-content\/uploads\/sites\/6\/2024\/07\/AI-report-ethics.jpg","distributor_meta":false,"distributor_terms":false,"distributor_media":false,"distributor_original_site_name":"StateScoop","distributor_original_site_url":"https:\/\/statescoop.com","push-errors":false,"_links":{"self":[{"href":"https:\/\/statescoop.com\/wp-json\/wp\/v2\/posts\/65519","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/statescoop.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/statescoop.com\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/statescoop.com\/wp-json\/wp\/v2\/users\/205"}],"replies":[{"embeddable":true,"href":"https:\/\/statescoop.com\/wp-json\/wp\/v2\/comments?post=65519"}],"version-history":[{"count":22,"href":"https:\/\/statescoop.com\/wp-json\/wp\/v2\/posts\/65519\/revisions"}],"predecessor-version":[{"id":65692,"href":"https:\/\/statescoop.com\/wp-json\/wp\/v2\/posts\/65519\/revisions\/65692"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/statescoop.com\/wp-json\/wp\/v2\/media\/65618"}],"wp:attachment":[{"href":"https:\/\/statescoop.com\/wp-json\/wp\/v2\/media?parent=65519"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/statescoop.com\/wp-json\/wp\/v2\/categories?post=65519"},{"taxonomy":"post_tag","embeddable"
:true,"href":"https:\/\/statescoop.com\/wp-json\/wp\/v2\/tags?post=65519"},{"taxonomy":"people","embeddable":true,"href":"https:\/\/statescoop.com\/wp-json\/wp\/v2\/people?post=65519"},{"taxonomy":"special-report","embeddable":true,"href":"https:\/\/statescoop.com\/wp-json\/wp\/v2\/special-report?post=65519"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/statescoop.com\/wp-json\/wp\/v2\/authors?post=65519"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}