For startups developing or using artificial intelligence, good business hinges on establishing Responsible AI practices.
1. Let's start with a Code of Conduct.
Startups have different Responsible AI needs and constraints than established companies. With limited resources for evaluation and implementation, they need fast, effective, and continuous evaluation of their AI activities while operating in a fast-paced environment.
AIEthica presents the Startup RAI Package, carefully tailored to meet the unique needs of startups. This package delivers key Responsible AI assessment criteria in a concise, proven format, fine-tuned for startups with limited time and resources. It provides a comprehensive set of services, available at a fixed price, comprising the following steps:
Startup RAI Package
1. Definition of the Code of Conduct
2. AI risk and impact assessment
3. Pre-assessment of AI Act risk category
4. Data Governance
5. Monitoring Services
2. Together, we'll navigate the AI risk and impact assessment.
The AI risk and impact assessment activity takes a high-level view of the development or deployment of AI applications. It provides an initial assessment of the risk profile and potential hazards of an application. Key factors to consider include the use of personal data, social context, stakeholders, and accountability structure.
Use of personal data
Social & political context
Characteristic of technology
Transparency & explainability
Stakeholder analysis
Accountability structure
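As a minimal illustration (with hypothetical names and structure, not AIEthica's actual tooling), the assessment factors above can be captured as a simple checklist so that each factor receives an explicit answer rather than being skipped:

```python
# Hypothetical checklist sketch: track which risk and impact
# assessment factors have been answered for an AI application.

ASSESSMENT_FACTORS = [
    "use of personal data",
    "social and political context",
    "characteristics of the technology",
    "transparency and explainability",
    "stakeholder analysis",
    "accountability structure",
]

def open_items(answers):
    """Return the factors that have not yet been assessed."""
    return [f for f in ASSESSMENT_FACTORS if f not in answers]

# Example: only one factor answered so far; five remain open.
remaining = open_items({"use of personal data": "profile data only"})
```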
3. It's time to explore the AI Act's risk category assessment.
The EU AI Act, which will regulate AI in the European Union, is built primarily around a risk classification scheme. So-called "high-risk" applications are the main focus of the regulation, and most regulatory measures target this risk profile.
Although a final risk classification is not yet possible, it is advisable to start a risk pre-assessment now, since most of the factors that determine a high-risk classification are already known. An early assessment saves time later, especially for a startup operating in a time-sensitive environment. This process comprises the following activities:
Industry specific risk category assessment
AI Act category requirements
Compliance preparation
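As a simplified sketch of such a pre-screening (illustrative only, not legal advice), the AI Act's Annex III lists domains in which AI systems are typically classified as high-risk; a first check is whether an application touches any of them:

```python
# Simplified pre-assessment sketch: flag applications that fall into
# an EU AI Act Annex III high-risk domain (list abridged and paraphrased).

HIGH_RISK_DOMAINS = {
    "biometric identification",
    "critical infrastructure",
    "education and vocational training",
    "employment and worker management",
    "access to essential services",
    "law enforcement",
    "migration and border control",
    "administration of justice",
}

def needs_full_assessment(application_domains):
    """Return the Annex III domains an application touches, if any."""
    return sorted(set(application_domains) & HIGH_RISK_DOMAINS)

# Example: a hiring-support tool touches the employment domain,
# so a full high-risk assessment would be advisable.
flagged = needs_full_assessment(
    ["employment and worker management", "marketing"]
)
```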
4. Finally, we will look at data governance for startups.
Data are at the core of every AI application, so data management, accountability, risk mitigation, and privacy are key activities on the path to Responsible AI. Many data-centric startups already have some form of data management in place, but Responsible AI demands additional measures, such as honoring users' right to have their data deleted, a requirement that most data management structures do not handle easily. The fourth section of the Startup Package consists of the following components:
Accountable data workflow
Data governance and privacy
Data accuracy assessment
Model Risk Management
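A minimal sketch (with hypothetical data) of why the right to deletion is hard: removing a user's raw record is easy, but artifacts derived from it, such as aggregates, caches, or trained models, still encode the deleted data and must be tracked down and recomputed, which is exactly what an accountable data workflow enables.

```python
# Hypothetical example: deleting a raw record does not remove
# derived artifacts that were computed from it.

raw_records = {"user_1": 10, "user_2": 20, "user_3": 60}

# A derived artifact computed before the deletion request arrives.
stale_average = sum(raw_records.values()) / len(raw_records)  # 30.0

# Honoring a deletion request for user_2 fixes the raw store ...
del raw_records["user_2"]

# ... but every derived artifact must be found and recomputed, which
# requires data lineage tracking (an accountable data workflow).
fresh_average = sum(raw_records.values()) / len(raw_records)  # 35.0
```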
QuantPi - Monitoring your AI compliance
AIEthica partners with QuantPi, a leading platform for AI trust, to deliver AI governance during the operational phase of AI applications. QuantPi provides continuous monitoring and risk management throughout this phase of the lifecycle.
Automatically translating regulations into operations
QuantPi is the only AI monitoring and risk management solution that translates regulations such as the EU AI Act directly into operational evaluation metrics.
Trust Profiles accelerate and operationalize any requirement for AI models and systems, whether the AI was built in-house or procured. This gives AI ethics, risk, compliance, and business teams an automated tool to govern AI models at scale without slowing down production.
Get in Touch
Whether you're embarking on an AI project, looking for employee training or wish to implement model risk management, we're here to assist. Just get in touch.
Address:
AIEthica
Alte Dorfstrasse 24
CH-8704 Herrliberg
Switzerland