Azure DevOps Podcast: Responsible AI – Episode 250

Topics of Discussion:

[3:01] Greg talks about being a military veteran from the first Gulf War and then transitioning into the technology arena.

[3:33] Giving back to the veteran community.

[6:04] Is AI inherently irresponsible?

[6:30] Greg defines responsible AI.

[7:02] Thinking about AI as your personal assistant, one that presents you with only the facts.

[8:53] The difference between the public models offered by the big companies and the alternative of creating your own model by choosing your own dataset and using GPT technology to analyze it.

[16:43] Hallucinations in AI and GPT models.

[17:10] What is actionable right now for developers designing AI systems so that safeguards are built in?

[21:55] The difference between fact and affirmation.

[23:41] The system shouldn’t just give us what we want; it should be able to route that want toward something factual.

[33:10] The design process for developers who want to create their own model.

[37:11] Does Greg have any ChatGPT models?

Azure DevOps Podcast: Greg Leonardo: Responsible AI – Episode 250 (clear-measure.com)