Workday: It’s Time to Close the AI Trust Gap

January 11, 2024
Workday, a leading provider of enterprise cloud applications for finance and human resources, recently released a global study highlighting the importance of addressing the AI trust gap. The company believes that trust is a critical factor in implementing artificial intelligence (AI) systems, especially in areas such as workforce management and human resources.



Research results are as follows:

  • At the leadership level, only 62% welcome AI, and only 62% are confident their organization will ensure AI is implemented in a responsible and trustworthy way. At the employee level, these figures drop further, to 52% and 55%, respectively.

  • 70% of leaders say AI should be developed in a way that easily allows for human review and intervention. Yet 42% of employees believe their company does not have a clear understanding of which systems should be fully automated and which require human intervention.

  • 1 in 4 employees (23%) are not confident that their organization will put employee interests above its own when implementing AI (compared to 21% of leaders).

  • 1 in 4 employees (23%) are not confident that their organization will prioritize innovating with care for people over innovating with speed (compared to 17% of leaders).

  • 1 in 4 employees (23%) are not confident that their organization will ensure AI is implemented in a responsible and trustworthy way (compared to 17% of leaders).


“We know how these technologies can benefit economic opportunities for people—that’s our business. But people won’t use technologies they don’t trust. Skills are the way forward, and not only skills, but skills backed by a thoughtful, ethical, responsible implementation of AI that has regulatory safeguards that help facilitate trust,” said Chandler C. Morse, VP of Public Policy at Workday.



Workday’s study focuses on various key areas:

Section 1: Perspectives align on AI’s potential and responsible use.

“At the outset of our research, we hypothesized that there would be a general alignment between business leaders and employees regarding their overall enthusiasm for AI. Encouragingly, this has proven true: leaders and employees are aligned in several areas, including AI’s potential for business transformation, as well as efforts to reduce risk and ensure trustworthy AI.”

Both leaders and employees believe in and hope for a transformation scenario with AI.

Both groups agree AI implementation should prioritize human control.

Both groups cite regulation and frameworks as most important for trustworthy AI.

Section 2: When it comes to the development of AI, the trust gap between leaders and employees widens further.

“While most leaders and employees agree on the value of AI and the need for its careful implementation, the existing trust gap becomes even more pronounced when it comes to developing AI in a way that facilitates human review and intervention.”

Employees aren’t confident their company takes a people-first approach.

At all levels, there is concern that human welfare isn’t a leadership priority.

Section 3: Data on AI governance and use is not readily visible to employees.

“While employees are calling for regulation and ethical frameworks to ensure that AI is trustworthy, there is a lack of awareness across all levels of the workforce when it comes to collaborating on AI regulation and sharing responsible AI guidelines.”

Closing remarks: How Workday is closing the AI trust gap.

Transparency: Workday can prioritize transparency in their AI systems. Providing clear explanations of how AI algorithms make decisions can help build trust among users. By revealing the factors, data, and processes that contribute to AI-driven outcomes, Workday can ensure transparency in their AI applications.

Explainability: Workday can work towards making their AI systems more explainable. This means enabling users to understand the reasoning behind AI-generated recommendations or decisions. Employing techniques like interpretable machine learning can help users comprehend the logic and factors influencing the AI-driven outcomes.

Ethical considerations: Working on ethical frameworks and guidelines for AI use can play a crucial role in closing the trust gap. Workday can ensure that their AI systems align with ethical principles, such as fairness, accountability, and avoiding bias. This might involve rigorous testing, auditing, and ongoing monitoring of AI models to detect and mitigate any potential biases or unintended consequences.

User feedback and collaboration: Engaging with users and seeking their feedback can be key to building trust. Workday can involve their customers and end-users in the AI development process, gathering insights and acting on user concerns. Collaboration and open communication will help Workday enhance their AI systems based on real-world feedback and user needs.

Data privacy and security: Ensuring robust data privacy and security measures is vital for instilling trust in AI systems. Workday can prioritize data protection and encryption, complying with industry standards and regulations. By demonstrating strong data privacy practices, they can alleviate concerns associated with AI-driven data processing.

SOURCE Workday