...

OpenAI warns not to use ChatGPT for hiring analysis

For its part, OpenAI has warned that its product shouldn’t be used for hiring decisions — or any procedure that could violate someone’s human rights.

The latest findings are in line with several recent studies showing that AI can give preferential treatment to candidates from certain demographics. Most experts think these biases are embedded in the vast corpus of material used to train AI.

OpenAI and its competitors have said they’re working to fix the problem. In the meantime, recruiters can try putting their own guardrails in place — like removing names before running applications through AI platforms.
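As a concrete illustration of that guardrail, below is a minimal Python sketch that strips a candidate's name from resume text before the text is sent to an AI platform. It assumes the name is already known from the applicant-tracking record; the function name and placeholder token are illustrative, not part of any vendor tool.

```python
import re

def redact_candidate_name(resume_text: str, candidate_name: str,
                          placeholder: str = "[CANDIDATE]") -> str:
    """Remove the candidate's name (and each of its parts) from resume text
    before it is passed to an AI screening tool.

    Assumes `candidate_name` comes from the applicant-tracking record, so no
    general-purpose name detection is attempted here.
    """
    redacted = resume_text
    # Redact the full name first, then individual parts (first, last, etc.).
    for token in [candidate_name] + candidate_name.split():
        pattern = re.compile(re.escape(token), re.IGNORECASE)
        redacted = pattern.sub(placeholder, redacted)
    return redacted


if __name__ == "__main__":
    resume = ("Jane Doe\n"
              "Software engineer with 8 years of experience.\n"
              "Contact: jane.doe@example.com")
    print(redact_candidate_name(resume, "Jane Doe"))
```

Note that this only covers the name itself; a fuller guardrail would also redact email addresses, phone numbers, and other identifying details.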

AI Deep Fake Image Challenges & Solutions

https://www.yahoo.com/finance/news/one-tech-tip-spot-ai-052451444.html

Experts say it may even be dangerous to put the burden on ordinary people to become digital Sherlocks: as deepfakes become increasingly difficult to spot, even for trained eyes, that burden can give people a false sense of confidence.

Using AI to Find the Fakes (limited access to these automated tools)

Another approach is to use AI to fight AI.

Microsoft has developed an authenticator tool that can analyze photos or videos and give a confidence score on whether they have been manipulated. Chipmaker Intel's FakeCatcher uses algorithms to analyze an image's pixels to determine whether it is real or fake.

There are tools online that promise to sniff out fakes if you upload a file or paste a link to the suspicious material. But some, like Microsoft's authenticator, are only available to selected partners and not the public. That's because researchers don't want to tip off bad actors and give them a bigger edge in the deepfake arms race.
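For orientation, the sketch below shows what the "confidence score" interface of such a detector typically looks like. It is purely hypothetical: Microsoft's authenticator and Intel's FakeCatcher are not publicly available, so this uses a generic Hugging Face image-classification pipeline with a placeholder model id rather than any real detection service.

```python
from transformers import pipeline

# Hypothetical illustration only: "some-org/deepfake-image-detector" is a
# placeholder model id, not a real checkpoint, and this is not the API of
# Microsoft's authenticator or Intel's FakeCatcher.
detector = pipeline("image-classification",
                    model="some-org/deepfake-image-detector")

# Path or URL of the suspicious image.
results = detector("suspicious_photo.jpg")

# A classifier of this kind returns label/score pairs; the score plays the
# role of the confidence score described above.
for result in results:
    print(f"{result['label']}: {result['score']:.2%}")
```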

AI Governance Concepts


ADGM: Automated Digital Governance Manager from SWT

...

Candidate Solutions

OpenLLama - Open LLM

https://skywebteam.atlassian.net/wiki/spaces/KHUB/pages/2540732417/OpenLlama+LLM
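For a quick evaluation of OpenLLama as a candidate solution, a minimal loading sketch with the Hugging Face transformers library is shown below. The checkpoint id, dtype, and generation settings are assumptions for illustration; the linked wiki page remains the authoritative setup guide.

```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

# Assumed checkpoint id for illustration; choose the size that fits the hardware.
MODEL_ID = "openlm-research/open_llama_3b"

tokenizer = LlamaTokenizer.from_pretrained(MODEL_ID)
model = LlamaForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,  # half precision to reduce memory use
    device_map="auto",          # requires the accelerate package
)

prompt = "Q: Summarize the purpose of an AI governance policy.\nA:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Short generation kept as a smoke test rather than a benchmark.
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```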

Securiti - steps to responsible AI

...