AI EdVanguard Insights
9.0K views | +0 today
Follow
AI EdVanguard Insights
Your new post is loading...
Your new post is loading...
Rescooped by Sabrina M. BUDEL from e-learning-ukr
Scoop.it!

Let’s not make the same mistakes with AI that we made with social media | MIT Technology Review

Let’s not make the same mistakes with AI that we made with social media | MIT Technology Review | AI EdVanguard Insights | Scoop.it
"Social media’s unregulated evolution over the past decade holds a lot of lessons that apply directly to AI companies and technologies."

Via Vladimir Kukharenko
No comment yet.
Rescooped by Sabrina M. BUDEL from Intelligent Learning Tech Solutions
Scoop.it!

The work of creation in the age of AI

The work of creation in the age of AI | AI EdVanguard Insights | Scoop.it
In which I become an old man yelling futilely at the clouds
No comment yet.
Scooped by Sabrina M. BUDEL
Scoop.it!

Poser les bases d'une croissance axée sur les données et l'IA

Poser les bases d'une croissance axée sur les données et l'IA | AI EdVanguard Insights | Scoop.it
Prêt(e) à réinventer votre stratégie de données et d'IA pour profiter d'un retour exponentiel dans la nouvelle course à l'IA ?Dans ce rapport MIT Technology Review Insights, vous découvrirez les conclusions d'une étude internationale menée auprès de 600 DSI, CDO, CTO, architectes en chef et data scientists en chef, au sujet des mesures qu'ils ou elles prennent pour se préparer à l'ère de l'IA.
No comment yet.
Scooped by Sabrina M. BUDEL
Scoop.it!

Can academics tell the difference between AI-generated and human-authored content?

Can academics tell the difference between AI-generated and human-authored content? | AI EdVanguard Insights | Scoop.it
A recent study asked students and academics to distinguish between scientific abstracts generated by ChatGPT and those written by humans. Omar Siddique analyses the results
No comment yet.
Scooped by Sabrina M. BUDEL
Scoop.it!

Generative AI: UNESCO study reveals alarming evidence of regressive gender stereotypes

Generative AI: UNESCO study reveals alarming evidence of regressive gender stereotypes | AI EdVanguard Insights | Scoop.it
Ahead of the International Women's Day, a UNESCO study revealed worrying tendencies in Large Language models (LLM) to produce gender bias, as well as homophobia and racial stereotyping. Women were described as working in domestic roles far more often than men – four times as often by one model – and
No comment yet.
Scooped by Sabrina M. BUDEL
Scoop.it!

Don’t just chat(GPT): turn on critical interrogation

Don’t just chat(GPT): turn on critical interrogation | AI EdVanguard Insights | Scoop.it
Critical thinking is often seen as the antidote to generative AI. But what if educators took it one step further back and aimed to encourage students’ curiosity? Giuseppe Cimadoro explains
No comment yet.
Scooped by Sabrina M. BUDEL
Scoop.it!

Guidance for generative AI in education and research

Description vide
No comment yet.
Rescooped by Sabrina M. BUDEL from Educational Technology News
Scoop.it!

Instead of Banning AI one University is Encouraging It With OpenAI Partnership

Instead of Banning AI one University is Encouraging It With OpenAI Partnership | AI EdVanguard Insights | Scoop.it
ASU has partnered with OpenAI as a part of strategy of studying AI to better understand how it can most effectively be used by students and faculty, for teaching, learning, research, and more.

Via EDTECH@UTRGV
EDTECH@UTRGV's curator insight, March 7, 1:22 PM

"Ban, penalize, discourage. That’s by and large been the policy toward AI technology implemented by many universities and schools since ChatGPT debuted and launched the era of generative AI."

Scooped by Sabrina M. BUDEL
Scoop.it!

Challenging systematic prejudices: an investigation into bias against women and girls in large language models

Challenging systematic prejudices: an investigation into bias against women and girls in large language models | AI EdVanguard Insights | Scoop.it
"This study explores biases in three significant large language models (LLMs): OpenAI’s GPT-2 and ChatGPT, along with Meta’s Llama 2, highlighting their role in both advanced decision-making systems and as user-facing conversational agents. Across multiple studies, the brief reveals how biases emerge in the text generated by LLMs, through gendered word associations, positive or negative regard for gendered subjects, or diversity in text generated by gender and culture. The research uncovers persistent social biases within these state-of-the-art language models, despite ongoing efforts to mitigate such issues. The findings underscore the critical need for continuous research and policy intervention to address the biases that exacerbate as these technologies are integrated across diverse societal and cultural landscapes. The emphasis on GPT-2 and Llama 2 being open-source foundational models is particularly noteworthy, as their widespread adoption underlines the urgent need for scalable, objective methods to assess and correct biases, ensuring fairness in AI systems globally. Key findings include that in gendered word association tasks, a recent LLM still exhibited previously identified biases and was significantly more likely to associate gendered names with traditional roles (e.g. female names with “home”, “family”, “children”; and male names with “business”, “executive”, “salary”, and “career”. In text generation tasks where LLMs were prompted to complete sentences beginning with a mention of a person’s gender, Llama 2 generated sexist and misogynistic content in approximately 20% of instances with sentences such as “The woman was thought of as a sex object and a baby machine.”, and “The woman was regarded as the property of her husband.” When it came to sexual identity, LLMs generated negative content about gay subjects in approximately 70% of instances for Llama 2 and in approximately 60% of instances for GPT-2. Finally, in generating content using prompts which intersect gender and culture with occupation, the results highlight a clear bias in AI-generated content, showing a tendency to assign more diverse and professional jobs to men (teacher, doctor, driver), while often relegating women to roles that are stereotypical or traditionally undervalued and controversial (prostitute, domestic servant, cook), reflecting a broader pattern of gender and cultural stereotyping in foundational LLMs.The issue brief reveals that efforts to address biased AI must mitigate bias where it originates in the AI development cycle, but also mitigate harm in the AI’s application context. This approach not only requires the involvement of multiple stakeholders, but as the recommendations provided in this brief make plain, a more equitable and responsible approach to AI development and deployment writ large." 
source: UNESCO
No comment yet.