Google has revised its terms of service to permit its generative AI tools to make automated decisions in high-risk sectors such as healthcare, provided a human supervises them, a change that has heightened concerns about bias and drawn attention to evolving regulation.
Google has updated its terms of service for its generative artificial intelligence (AI) tools, allowing customers to use them to make “automated decisions” in “high-risk” sectors such as healthcare. The change, published on Tuesday as part of Google’s Generative AI Prohibited Use Policy, stipulates that such automated decisions are permissible as long as a human supervises them.
The updated policy specifies that Google’s generative AI can be used to make consequential decisions that may have a “material detrimental impact on individual rights.” These high-risk areas include employment, housing, insurance, and social welfare. Automated decisions are defined as choices made by AI systems on the basis of both factual and inferred data; for example, an AI might be tasked with deciding whether to approve a loan or with evaluating a job applicant’s suitability.
The previous version of Google’s terms suggested a blanket ban on the use of its generative AI for high-risk automated decision-making. However, a Google spokesperson told TechCrunch that the requirement for human oversight in high-risk domains has always been part of the policy. “The human supervision requirement was always in our policy, for all high-risk domains,” the spokesperson said. “[W]e’re recategorizing some items [in our terms] and calling out some examples more explicitly to be clearer for users.”
In contrast to Google’s position, competitors such as OpenAI and Anthropic apply stricter rules to high-risk decision-making. OpenAI prohibits the use of its services for automated decisions concerning credit, employment, housing, education, and insurance. Anthropic permits its AI to be deployed in sectors such as law and healthcare, but only under the supervision of a qualified professional, and it requires customers to disclose that AI is being used for these purposes.
The use of AI for automated decision-making in high-stakes scenarios has drawn increased scrutiny from regulators concerned about potential bias in the technology. Research has indicated that AI systems used to make consequential decisions, such as approving loans or mortgage applications, can perpetuate historical prejudice and discrimination.
Organisations such as Human Rights Watch have gone further, advocating a ban on “social scoring” systems. They argue that such systems could disrupt individuals’ access to essential services such as social security, pose risks to personal privacy, and enable harmful profiling.
Regulatory frameworks are evolving to address these concerns. In the European Union, the AI Act subjects high-risk AI systems, including those that make significant personal decisions, to the strictest oversight: providers must register in an official database, carry out quality and risk management, implement human supervision protocols, and report relevant incidents to the authorities.
Similar initiatives are emerging in the United States. Colorado recently passed legislation requiring AI developers to disclose information about their high-risk AI systems and to publish clear summaries of those systems’ capabilities and limitations. New York City now requires automated employment decision tools to undergo annual bias audits to check for unfairness in candidate evaluation.
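To illustrate the arithmetic behind such audits: New York City’s guidance centres on an “impact ratio”, each group’s selection rate divided by the selection rate of the most-selected group. The sketch below is a minimal, hypothetical illustration of that calculation in Python; the category names, counts, and the 0.8 “four-fifths” flag threshold are illustrative assumptions, not values prescribed by the law.

```python
# Hypothetical selection outcomes per demographic category:
# (candidates selected by the tool, total candidates assessed).
selections = {
    "category_a": (48, 120),
    "category_b": (30, 100),
    "category_c": (12, 60),
}

# Selection rate for each category: selected / assessed.
rates = {group: sel / total for group, (sel, total) in selections.items()}

# Impact ratio: each category's selection rate relative to the
# most-selected category's rate.
highest = max(rates.values())
impact_ratios = {group: rate / highest for group, rate in rates.items()}

for group, ratio in impact_ratios.items():
    # The 0.8 cut-off mirrors the EEOC "four-fifths" convention and is an
    # illustrative assumption, not a threshold mandated by the NYC rule.
    flag = " (below 0.8)" if ratio < 0.8 else ""
    print(f"{group}: selection rate {rates[group]:.2f}, impact ratio {ratio:.2f}{flag}")
```

With these assumed figures, the audit would report impact ratios of 1.00, 0.75, and 0.50, flagging the latter two categories for closer review.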
Source: Noah Wire Services
- https://mb.com.ph/2024/4/21/google-updates-terms-of-service-to-enhance-clarity-and-incorporate-ai-content-provisions – Corroborates Google’s updated Terms of Service, including the clarity changes, user ownership of AI-generated content, the human-supervision requirement for Google’s AI tools, and compliance with laws in France and Australia.
- https://9to5google.com/2024/04/16/google-tos-ai/ – Details the new Terms of Service updates, including the clause that Google won’t claim ownership over AI-generated content, country-specific changes, and the prohibited uses of Google’s services (abuse, harm, interference, and fraudulent activities) relevant to high-risk automated decision-making.
- https://cloud.google.com/trustedtester/aitos – Provides additional terms for Generative AI Preview Products, including use restrictions and the importance of human supervision, although it does not directly address high-risk decision-making.
- https://www.hrw.org/news/2023/10/24/eu-ai-act-falls-short-protecting-human-rights – Human Rights Watch’s concerns about AI systems, including social scoring and potential biases, are relevant to the scrutiny of high-stakes AI decision-making.
- https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0206 – The EU AI Act’s requirements for high-risk AI systems, including registration, quality and risk management, and human supervision, align with the regulatory frameworks mentioned.
- https://leg.colorado.gov/bills/hb23-1156 – Colorado’s legislation on AI developers disclosing information about high-risk AI systems supports the evolving regulatory frameworks to address AI risks.
- https://www1.nyc.gov/assets/cchr/downloads/pdf/publications/AI-Employment-Decision-Making-Tools-Guidance.pdf – New York City’s regulations on automated employment decision-making tools, including annual bias audits, reflect the measures to ensure fairness and accountability in AI technologies.