AI Now and Next: Understanding What People Want From Artificial Intelligence

@aidevelopercode · Created on Thu Sep 04 2025

Artificial intelligence (AI) is no longer just a futuristic concept; it’s a part of our daily lives. From smartphones to schools and hospitals, AI is everywhere. As these technologies grow smarter and more prevalent, a fundamental question arises: what do people truly desire from AI today and in the future? This guide seeks to ground this discussion, emphasizing usefulness, safety, fairness, and accountability, backed by practical examples and reliable sources.

Why This Discussion Matters Now

The adoption of AI is rapidly increasing across various sectors and in daily life. Governments are enacting new regulations, companies are launching AI features weekly, and employees are using AI tools to enhance their productivity. In such a dynamic environment, it’s crucial to align advancements in AI with the values and desires of the public.

As these frameworks develop, the public expects AI to provide real value without compromising safety, privacy, or fairness. The following sections explore these expectations and outline the steps needed to meet them.

What People Want From AI Today

Through surveys, policy debates, and workplace experiences, several key themes have emerged regarding what people want from AI: utility, reliability, respect for rights, and accountability.

1. Trustworthy Utility

AI should facilitate productivity in ways that are fast, clear, and dependable. This includes tasks like drafting documents, data analysis, customer support, coding assistance, and creative brainstorming. Furthermore, AI systems should be transparent about their outputs, allowing users to verify sources easily.

  • Provide verifiable results. Cite sources, outline steps, and include links to supporting evidence wherever possible.
  • Adhere to safety protocols. Tools should avoid processing unsafe or overtly harmful requests and guide users towards safer options.
  • Encourage human oversight. Users should be able to review, edit, and override AI outputs effortlessly.

2. Privacy By Default

Individuals want to engage with AI without the fear of their sensitive data being misused or retained indefinitely. Clear data practices, minimal data collection, and strong security measures are essential. Emerging standards like the NIST AI Risk Management Framework offer practical guidelines for privacy and governance in AI development and deployment (NIST AI RMF).

3. Fairness and Inclusion

AI systems must cater to diverse populations, encompassing various languages, cultures, and contexts. This entails measuring and mitigating bias, enhancing performance in low-resource languages, and being transparent about model limitations. Global frameworks, such as the OECD AI Principles and UNESCO’s ethics recommendations, stress fairness and human rights.

4. Transparency and Accountability

Users want clarity regarding when they’re interacting with AI, the data utilized, the evaluation process, and channels for reporting issues. Many governance strategies now mandate transparency reports and impact assessments, including the EU AI Act and the US Executive Order.

5. Reliability and Safety

AI models should avoid fabricating information, consistently follow instructions, and effectively manage edge cases. Utilizing benchmarks and safety tests can aid in ensuring reliability. Resources like the Stanford AI Index monitor capabilities and risks across evaluations, while the NIST AI RMF outlines actionable steps for identifying, measuring, and mitigating risks.

Concerns People Have and Desired Solutions

Concerns about AI are concrete and visible in real products and in media coverage. Addressing them is vital for building trust.

1. Misinformation and Deepfakes

Users want clarity about the origins of digital content: who produced it, when, and how. The C2PA standard for content credentials enables tamper-evident metadata to be attached to images, audio, and video. Watermarking synthetic media remains an active research area, and no single method is currently robust to every transformation. Still, expect significant advances in provenance, detection, and disclosure as platforms adopt these strategies at scale.
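To make the idea of tamper-evident metadata concrete, here is a loose illustration in Python using an HMAC over the media bytes plus their metadata. This is a simplification, not the actual C2PA standard: real content credentials use public-key signatures and certificate chains rather than a shared secret, and the key name below is hypothetical.

```python
import hashlib
import hmac
import json

# Hypothetical signing key; C2PA itself uses public-key signatures
# and certificate chains, not a shared secret like this.
SECRET_KEY = b"publisher-signing-key"

def attach_credentials(media_bytes: bytes, metadata: dict) -> dict:
    """Bind metadata to the media so edits to either are detectable."""
    payload = json.dumps(metadata, sort_keys=True).encode()
    digest = hmac.new(SECRET_KEY, media_bytes + payload, hashlib.sha256).hexdigest()
    return {"metadata": metadata, "signature": digest}

def verify_credentials(media_bytes: bytes, record: dict) -> bool:
    """Recompute the MAC and compare; any edit breaks the match."""
    payload = json.dumps(record["metadata"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, media_bytes + payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

image = b"\x89PNG...raw image bytes..."
record = attach_credentials(image, {"creator": "News Desk", "created": "2025-09-04"})
assert verify_credentials(image, record)             # untouched: passes
assert not verify_credentials(image + b"x", record)  # edited media: fails
```

The key property, shared with real provenance schemes, is that changing either the media or its metadata invalidates the signature, so consumers can detect tampering even if they cannot prove who did it.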

2. Bias, Fairness Gaps, and Civil Rights

Bias in AI can lead to harmful outcomes across various sectors, like hiring, credit scoring, and healthcare. Policymaking is catching up: the US Executive Order mandates civil rights protections within AI systems, offering clear guidelines for housing, credit, and employment decisions (White House). The EU AI Act also requires transparency and oversight for high-risk applications (EU).

3. Jobs, Productivity, and Inequality

AI is poised to transform the job landscape. The IMF estimates that about 40 percent of global jobs are exposed to AI, suggesting potential for both displacement and productivity enhancement. According to the World Economic Forum, there is growing demand for analytical thinking, AI literacy, and creative problem-solving. Meanwhile, McKinsey estimates that generative AI could add trillions of dollars in annual economic value if deployed responsibly. To ensure a just transition, there is a call for reskilling, worker protections, and shared benefits.

4. Security and Misuse

As AI capabilities expand, so do risks such as automated phishing, model theft, and the misuse of models in harmful applications. Governments and research labs are enhancing their evaluation processes and safety measures to analyze potential threats before releasing products. The 2023 UK AI Safety Summit led to the establishment of the Bletchley Declaration, where numerous countries committed to advancing AI safety research and governance.

5. Energy and Computing Demands

Training and deploying large AI models can be energy-intensive. The International Energy Agency forecasts that global electricity demand from data centers, AI included, may double by 2026 if current trends persist. People are urging for increased efficiency, transparency regarding environmental impact, and greater adoption of clean energy.

Future Aspirations for AI

Looking beyond immediate concerns, there exists a broader vision for AI that enhances human potential and opportunity in significant ways.

1. Human-Centered Co-Pilots Instead of Replacements

The most effective AI tools will serve as co-pilots that support human judgment rather than replace it. This means designing smooth handoffs between human and machine, providing clear previews and controls, and keeping humans in the loop for consequential decisions. For instance, in coding, co-pilots can speed up repetitive tasks while developers oversee security and performance. In healthcare, clinical decision support tools should offer explanations and cite sources so clinicians can validate recommendations, in line with guidance from the World Health Organization.

2. More Personalized and Equitable Education

Many individuals desire AI tutors that adapt to their pace, language, and learning style without compromising data privacy. AI should also help educators save time on administrative duties and create accessible materials. Thoughtful design is crucial, ensuring transparency in content sources, opt-in data usage, and well-defined boundaries regarding grading or significant decisions.

3. Accelerated Scientific Research and Improved Health

AI holds the potential to speed up research processes in areas like protein design, drug discovery, and climate modeling. In clinical settings, it promises earlier diagnoses, enhanced triage accuracy, and better access through remote tools. WHO guidelines underscore the importance of safety, effectiveness, data quality, and equitable access in AI for health (WHO).

4. Climate Solutions and Building Resilience

AI can optimize energy consumption, minimize waste, and enhance renewable forecasting. Additionally, it can aid in climate adaptation by improving disaster responses. The challenge is to minimize the environmental footprint of AI itself while leveraging the technology to support sustainability initiatives. The IEA advocates for efficiency improvements and clarity in data center energy consumption (IEA).

5. Accessibility and Inclusion by Design

There is a pronounced desire for AI that enhances accessibility, including live captions that recognize accents, image descriptions, multi-language voice interfaces, and privacy-respecting assistive technologies. Creating inclusive datasets, localized evaluations, and engaging diverse communities in testing are vital to achieving this.

Governance People Can Trust

As technology evolves, people are seeking safeguards that are pragmatic, globally consistent, and adaptable.

1. Clear, Risk-Based Regulations

The EU AI Act employs a risk-based framework, introducing stricter requirements for high-risk uses. The US Executive Order calls for agencies to establish standards for safety, civil rights, healthcare, and cybersecurity. Alongside the NIST AI RMF and the newly introduced ISO/IEC 42001 AI management system standard, organizations have a growing toolkit for responsible AI deployment.

2. Practical Transparency and Oversight

Governance should be judged on outcomes, not promises. In practice, that means model cards, system evaluations, incident reporting, and third-party testing for high-risk systems. Public registers for high-risk applications can enhance accountability, as proposed in the EU framework.

3. Open Research and Shared Safety Tools

Despite varying approaches, there is a prevailing support for increased openness in safety research—sharing benchmarks, red-teaming methods, and evaluation datasets. The Stanford AI Index and similar academic efforts provide public baselines that facilitate policymakers and practitioners in tracking progress and managing risks.

Technical Priorities Reflecting Human Values

To fulfill public expectations, the AI community must prioritize features and methodologies that translate values into actionable product behavior.

  • Alignment and Instruction Following. Models should adhere to user intent within safety parameters and defer to human authority.
  • Robustness and Reliability. Minimize inaccuracies and enhance consistency across various prompts, equipped with clear uncertainty indicators.
  • Transparency and Interpretability. Provide straightforward explanations, references, and justifications where feasible, and disclose known limitations.
  • Privacy-Preserving Techniques. Implement data minimization, set retention limits, and utilize methods like differential privacy or federated learning as appropriate.
  • Security by Design. Strengthen models and infrastructure against threats like prompt injection and data exfiltration, and document the responses.
  • Safety Evaluations and Red-Teaming. Test for potential hazards and misuse pathways prior to product release, and regularly update safety measures.
  • Content Provenance and Disclosure. Integrate C2PA content credentials and ensure users are informed about AI-generated media.
  • Efficiency and Sustainability. Monitor and disclose energy use for notable training operations, and optimize inference efficiency to reduce costs and environmental impact.
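As one concrete example of the privacy-preserving techniques listed above, differential privacy's Laplace mechanism adds calibrated noise to an aggregate statistic so no single individual's record dominates the result. A minimal sketch (the epsilon value and the count query are illustrative choices, not a production recipe):

```python
import math
import random

def dp_count(records: list, predicate, epsilon: float = 1.0) -> float:
    """Differentially private count: the true count plus Laplace noise.

    A count query has sensitivity 1 (adding or removing one record
    changes the result by at most 1), so the noise scale is 1 / epsilon.
    Smaller epsilon means more noise and stronger privacy.
    """
    true_count = sum(1 for r in records if predicate(r))
    scale = 1.0 / epsilon
    # Sample Laplace(0, scale) noise via inverse-CDF on a uniform draw.
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

random.seed(42)
data = [{"age": a} for a in range(100)]
print(dp_count(data, lambda r: r["age"] >= 65))  # true count 35, plus noise
```

Averaged over many queries the noisy answers center on the true count, which is why differential privacy pairs well with the data-minimization and retention-limit practices above: useful aggregates survive while individual records stay protected.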

Economic Opportunities Where AI Can Help

People want AI systems that save time, cut costs, and improve outcomes. AI delivers the most value when it is integrated into workflows rather than bolted on as an afterthought.

1. Knowledge Work and Co-Creation

Generative AI excels in drafting, summarizing information, data manipulation, and code generation. Studies and industry findings suggest significant productivity improvements when humans are involved in the process (McKinsey).

2. Enhanced Customer Experience and Operations

AI can expedite support interactions, streamline issue routing, and personalize responses while maintaining pathways for human escalation. In operations, AI contributes to enhanced forecasting, quality control, and detection of anomalies through large-scale pattern recognition.

3. Assistance for Small Businesses and Nonprofits

For smaller entities, AI can equalize opportunities: automating back-office processes, aiding marketing efforts, and providing basic data analysis without the need for a dedicated data science team. The focus should be on offering affordable tools with clear limitations and privacy guarantees.

4. Enhancements in Public Sector and Social Impact

Governments can leverage AI to improve service delivery, decrease administrative burdens, and broaden access to information without infringing on civil rights or due process. Transparent procurement, pilot programs, and independent evaluations can guarantee benefits while minimizing unintended consequences.

Global Equity: Making AI Work for Everyone

Technological progress must not exacerbate the digital divide. To make AI beneficial for a larger demographic, investment in infrastructure, data quality, and local capacity is essential.

  • Language Inclusion. Promote support for a broader range of languages and dialects, particularly those that are low-resourced, while testing performance in local contexts.
  • Affordable Access. Expand connectivity, computational resources, and educational opportunities to empower more communities to build with AI rather than simply consuming it.
  • Open Education. Disseminate curricula, tutorials, and initial datasets to cultivate local talent and foster entrepreneurship.
  • Global Norms, Local Context. Adhere to overarching principles like the OECD AI Principles while adapting to regional needs through community engagement and governance.

How to Implement AI Responsibly Today

You don’t have to wait for stringent regulations to begin using AI responsibly. Here are some actionable practices for individuals and teams:

  • Verify Important Outputs. For critical decisions, check sources, conduct spot checks, and utilize multiple tools.
  • Safeguard Sensitive Data. Avoid entering confidential or personal information into tools that could retain that data. Use enterprise-level features with clear data controls.
  • Be Transparent About AI Use. In professional and educational settings, disclose when you leveraged AI and in what capacity.
  • Establish Guardrails. Define acceptable uses, risk thresholds, and escalation procedures, even for small teams.
  • Measure Benefits and Risks. Track the time saved, accuracy improvements, and error rates to identify where AI is truly beneficial and where it may fall short.
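The last practice above, measuring benefits and risks, can start as a simple log of task outcomes. A minimal sketch of what a team might track (the field names and thresholds are illustrative, not a standard schema):

```python
from dataclasses import dataclass

@dataclass
class TaskRecord:
    minutes_baseline: float   # typical time for this task without AI
    minutes_with_ai: float    # actual time with AI assistance
    output_had_error: bool    # did human review catch a mistake?

def summarize(records: list) -> dict:
    """Aggregate time saved and error rate across logged tasks."""
    n = len(records)
    saved = sum(r.minutes_baseline - r.minutes_with_ai for r in records)
    errors = sum(1 for r in records if r.output_had_error)
    return {
        "tasks": n,
        "total_minutes_saved": saved,
        "avg_minutes_saved": saved / n if n else 0.0,
        "error_rate": errors / n if n else 0.0,
    }

log = [
    TaskRecord(30, 10, False),
    TaskRecord(45, 20, True),
    TaskRecord(15, 12, False),
]
print(summarize(log))  # 48 minutes saved across 3 tasks, error rate ~0.33
```

Even a log this simple makes the trade-off visible: if the error rate climbs while time saved shrinks, the tool is not earning its place in the workflow.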

Indicators of Positive Progress in AI

People seek clear indicators that advancements in AI translate to improvements in human life. Here are key signs to monitor:

  • Fewer critical incidents, enhanced transparency in reporting, and quicker resolutions.
  • Declining hallucination rates and improved uncertainty indicators in daily tasks.
  • Increased language support and better fairness metrics across demographics.
  • The widespread adoption of content provenance practices and clearer labeling for synthetic media across leading platforms.
  • Reduced energy and costs per valuable outcome, especially in high-volume environments.
  • More universally accessible products, with measurable improvements in inclusion.
  • Growing alignment between global principles and everyday practical applications.

Conclusion: Progress That Resonates

The expectations of the world for AI are clear. People want tools that are helpful and trustworthy, systems that honor their rights, and innovations that genuinely enhance their lives. Achieving this vision requires a blend of product craftsmanship, scientific integrity, and thoughtful governance. By getting the fundamentals right today, the next wave of AI innovation can feel like a shared journey toward collective progress.

FAQs

Is AI going to replace most jobs?

AI will transform many jobs and tasks, but most roles will evolve rather than be eliminated. The IMF estimates that approximately 40 percent of global jobs are vulnerable to AI, presenting both risks and potential productivity gains. Reskilling and human-centered design will be crucial (IMF).

How can I identify AI-generated content?

Look for labels and content credentials. The C2PA standard enables publishers to attach tamper-evident provenance data. Although detection and watermarking technologies are improving, the combination of provenance and transparency remains the most reliable option today (C2PA).

Are governments collaborating on AI safety?

Yes. The EU AI Act establishes a comprehensive framework, the US has enacted an Executive Order along with agency guidelines, and numerous countries have endorsed the Bletchley Declaration to propel AI safety research and governance (EU, US, Bletchley Declaration).

What does responsible AI look like within a company?

Responsible AI frameworks typically include policies aligned with the NIST AI RMF, risk assessments for significant use cases, human oversight, incident processes, transparency disclosures, and continuous model evaluations (NIST, ISO/IEC 42001).

Is AI sustainable?

This depends on design choices and the scale of deployment. AI can promote efficiency and aid in climate initiatives, but it also requires substantial energy. The IEA advocates for improved data center efficiency and the use of cleaner energy sources (IEA).

Sources

  1. EU AI Act
  2. US Executive Order on AI
  3. NIST AI Risk Management Framework
  4. UNESCO Recommendation on the Ethics of AI
  5. OECD AI Principles
  6. WHO Guidance: Ethics and Governance of AI for Health
  7. Stanford AI Index 2024
  8. IEA: Data Centers and AI
  9. IMF: GenAI and Jobs
  10. World Economic Forum: Future of Jobs Report 2023
  11. McKinsey: The Economic Potential of Generative AI
  12. C2PA: Content Credentials and Provenance
  13. Bletchley Declaration
  14. ISO/IEC 42001: AI Management System

Thank You for Reading this Blog and See You Soon! 🙏 👋
