Practice

Applying AI Ethics and Safety

1. Case Study Analysis

  • Analyze real-world examples of AI misuse (e.g., biased hiring algorithms, facial recognition controversies, misinformation spread).

  • Identify ethical concerns and propose ways to mitigate them.

  • Example cases:

    • COMPAS Algorithm Bias in Criminal Justice

    • Amazon’s AI Recruiting Bias Issue

2. Bias Detection in AI Models

  • Take a model’s training dataset and check it for biases (e.g., under-representation of certain groups, skewed labels).

  • Apply fairness metrics using Python libraries (Fairlearn, AIF360).

  • Compare model performance across demographic groups; a minimal Fairlearn sketch follows this list.

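A minimal sketch of such a fairness check with Fairlearn, assuming a hypothetical tabular hiring dataset (`hiring_data.csv` with a binary `hired` label and a `sex` column used only for the audit); swap in your own data and sensitive feature:

```python
# Hypothetical fairness audit sketch: per-group accuracy plus a
# demographic parity check. File and column names are illustrative.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from fairlearn.metrics import MetricFrame, demographic_parity_difference

df = pd.read_csv("hiring_data.csv")   # hypothetical dataset
y = df["hired"]                       # binary outcome label
sensitive = df["sex"]                 # sensitive attribute, audit only
X = df.drop(columns=["hired", "sex"])

X_train, X_test, y_train, y_test, s_train, s_test = train_test_split(
    X, y, sensitive, test_size=0.3, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
y_pred = model.predict(X_test)

# Per-group accuracy: large gaps between groups are a red flag.
mf = MetricFrame(
    metrics=accuracy_score,
    y_true=y_test,
    y_pred=y_pred,
    sensitive_features=s_test,
)
print(mf.by_group)

# Demographic parity difference: 0 means equal selection rates across
# groups; values close to 1 indicate strong disparity.
print(demographic_parity_difference(y_test, y_pred, sensitive_features=s_test))
```
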
3. AI Privacy and Security Assessment

  • Conduct a privacy audit of an AI system:

    • What data is collected?

    • How is it stored and shared?

    • Are there compliance risks (GDPR, CCPA)?

  • Simulate a data breach scenario and suggest mitigation steps. A sketch of a structured audit checklist follows this list.

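One way to make the audit concrete is to record each collected data item as structured data and flag simple rule-of-thumb risks. The sketch below is illustrative only; the system name, fields, and thresholds are assumptions, not legal criteria:

```python
# Hypothetical privacy-audit checklist as a data structure.
from dataclasses import dataclass, field

@dataclass
class DataItem:
    name: str               # e.g. "email address"
    purpose: str            # why it is collected
    storage: str            # where and how it is stored
    shared_with: list[str]  # third parties receiving it
    retention_days: int     # how long it is kept
    legal_basis: str        # e.g. "consent" (GDPR Art. 6)

@dataclass
class PrivacyAudit:
    system: str
    items: list[DataItem] = field(default_factory=list)

    def compliance_flags(self) -> list[str]:
        """Flag simple, rule-of-thumb risks (illustrative thresholds)."""
        flags = []
        for item in self.items:
            if item.retention_days > 365:
                flags.append(f"{item.name}: retention over one year")
            if not item.legal_basis:
                flags.append(f"{item.name}: no documented legal basis")
        return flags

audit = PrivacyAudit(
    system="resume-screening chatbot",  # hypothetical system
    items=[DataItem("email address", "account login", "encrypted DB",
                    ["analytics vendor"], 730, "consent")],
)
print(audit.compliance_flags())  # ['email address: retention over one year']
```
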
4. Ethical AI Decision-Making Scenarios

  • Work through hypothetical dilemmas that require a decision on AI-related ethical issues, for example:

    • Should an AI system prioritize fairness over accuracy?

    • If an AI chatbot spreads false information, who is responsible?

    • Should an AI tool explain its decisions even if doing so makes it less effective?

5. AI Governance and Policy Writing

  • Draft an AI Ethics Policy for a company adopting AI solutions.

  • Include guidelines on bias reduction, transparency, and risk assessment.

6. Hands-On AI Explainability Task

  • Take a trained model and use explainability tools (SHAP, LIME) to analyze why it makes certain decisions; a minimal SHAP sketch follows this list.

  • Visualize decision-making patterns and discuss their ethical implications.

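A minimal SHAP sketch, assuming a scikit-learn random forest on the built-in breast-cancer dataset; note that the shape of the returned SHAP values for classifiers varies across SHAP versions, which the sketch handles explicitly:

```python
# Hypothetical explainability sketch: SHAP values for a tree ensemble.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Older SHAP versions return one array per class; newer versions return
# a single (samples, features, classes) array. Select the positive class.
vals = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]

# Beeswarm summary: which features drive predictions, and in which
# direction. Use this as the starting point for the ethics discussion.
shap.summary_plot(vals, X_test)
```
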