Using AI Responsibly
Maryland’s vision for responsible AI
AI is evolving rapidly - both the opportunities it affords and the risks it presents - and will continue doing so for the foreseeable future. From the start, the State's efforts have been driven by the imperative to use this powerful technology in ways that are responsible, ethical, beneficial, and trustworthy, and to build the capabilities that ensure it is used that way.
In January 2024, Governor Moore signed an executive order that roots all agency use of AI in a set of fundamental principles. Our policies and governance all build on this baseline to ensure we “first do no harm.”
Fairness and equity
The State's use of AI must account for the fact that AI systems can perpetuate harmful biases, and the State must take steps to mitigate those risks in order to avoid discrimination or disparate impact on individuals or communities based on their race, color, ethnicity, sex, religion, age, ancestry or national origin, disability, veteran status, marital status, sexual orientation, gender identity, genetic information, or any other classification protected by law.
Innovation
When used responsibly and in human-centered and mission-aligned ways, AI has the potential to be a tremendous force for good. The State commits to exploring ways AI can be leveraged to improve State services and resident outcomes.
Privacy
Individuals' privacy rights should be preserved by design in the State's use of AI, and data creation, collection, and processing should be secure and in line with all applicable laws and regulations.
Safety, security, and resiliency
AI presents new challenges and opportunities for ensuring the safety and security of Maryland residents, infrastructure, systems, and data. The State commits to adopting best practice guidelines and standards to surface and mitigate safety risks stemming from AI, while ensuring AI tools are resilient to threats.
Validity and reliability
The behavior of AI systems can change over time. The State should have mechanisms to ensure that these systems continue to work as intended, with accurate outputs and robust performance.
Transparency, accountability, and explainability
The State's use of AI should be clearly and regularly documented and disclosed to enable accountability. The outputs of AI systems in use by the State should be explainable and interpretable to oversight bodies and residents, with clear human oversight.