• Data privacy
U.S. Department of Labor Releases Principles for AI Use in the Workplace to Protect Worker Rights (original text appended)

Today, May 16, the U.S. Department of Labor released a set of principles for the use of artificial intelligence (AI) in the workplace, intended to give employers guidance on ensuring that AI technologies are developed and used with workers at the center, improving the quality of work and life for all workers. Acting Secretary of Labor Julie Su said in a statement: "Workers must be at the center of our nation's approach to AI technology development and use. These principles reflect the Biden-Harris administration's belief that AI must not only comply with existing law, but also raise the quality of work and life for all workers."

According to the Department of Labor, the AI principles include:

Centering worker empowerment: Workers and their representatives, especially those from underserved communities, should be informed of and have genuine input into the design, development, testing, training, use, and oversight of AI systems. This ensures that workers' needs and feedback are considered throughout the AI lifecycle.

Developing AI ethically: AI systems should be designed, developed, and trained with the goal of protecting workers. This means prioritizing worker safety, health, and well-being during AI development and preventing adverse impacts on workers.

Establishing AI governance and human oversight: Organizations should have clear governance systems, procedures, human oversight, and evaluation processes that ensure workplace AI is used ethically, with appropriate oversight mechanisms to prevent misuse.

Ensuring transparency in AI use: Employers should be transparent with workers and job seekers about the AI systems they use. This includes explaining the systems' functions, purposes, and specific workplace applications, which strengthens workers' trust.

Protecting labor and employment rights: AI systems should not violate or undermine workers' right to organize, health and safety rights, wage and hour rights, or anti-discrimination and anti-retaliation protections. This ensures workers' basic labor rights are not infringed as AI is applied.

Using AI to support workers: AI systems should assist, complement, and support workers, and improve job quality. AI should be used to raise workers' productivity and comfort, not to replace them or add to their workload.

Supporting workers impacted by AI: Employers should support or upskill workers during AI-related job transitions, including providing training and career development opportunities that help workers adapt to new environments and technical demands.

Ensuring responsible use of worker data: Worker data collected, used, or created by AI systems should be limited to legitimate business purposes and protected and handled responsibly, safeguarding data privacy and security and preventing misuse.

These principles were developed under President Biden's Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence and are meant to give developers and employers a roadmap that ensures workers benefit from the new opportunities AI creates while avoiding its potential harms.

The Biden administration stresses that the principles are not limited to specific industries but should be applied broadly across sectors. They are not an exhaustive list but a guiding framework that businesses can customize to their own circumstances and implement as best practices with worker input. In this way, the administration hopes to protect workers' rights and avoid the technology's potential downsides while ensuring AI continues to drive innovation and opportunity.

Now that these principles have been released, how do you think they will affect your company's use of AI and its protection of worker rights?

English original:

Department of Labor's Artificial Intelligence and Worker Well-being: Principles for Developers and Employers

Since taking office, President Biden, Vice President Harris, and the entire Biden-Harris Administration have moved with urgency to harness AI's potential to spur innovation, advance opportunity, and transform the nature of many jobs and industries, while also protecting workers from the risk that they might not share in these gains. As part of this commitment, the AI Executive Order directed the Department of Labor to create Principles for Developers and Employers when using AI in the workplace. These Principles will create a roadmap for developers and employers on how to harness AI technologies for their businesses while ensuring workers benefit from new opportunities created by AI and are protected from its potential harms.
The precise scope and nature of how AI will change the workplace remains uncertain. AI can positively augment work by replacing and automating repetitive tasks or assisting with routine decisions, which may reduce the burden on workers and allow them to better perform other responsibilities. Consequently, the introduction of AI-augmented work will create demand for workers to gain new skills and training to learn how to use AI in their day-to-day work. AI will also continue creating new jobs, including those focused on the development, deployment, and human oversight of AI.

But AI-augmented work also poses risks if workers no longer have autonomy and direction over their work or their job quality declines. The risks of AI for workers are greater if it undermines workers' rights, embeds bias and discrimination in decision-making processes, or makes consequential workplace decisions without transparency, human oversight, and review. There are also risks that workers will be displaced entirely from their jobs by AI.

In recent years, unions and employers have come together to collectively bargain new agreements setting sensible, worker-protective guardrails around the use of AI and automated systems in the workplace. In order to provide AI developers and employers across the country with a shared set of guidelines, the Department of Labor developed "Artificial Intelligence and Worker Well-being: Principles for Developers and Employers" as directed by President Biden's Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, with input from workers, unions, researchers, academics, employers, and developers, among others, and through public listening sessions.

APPLYING THE PRINCIPLES

The following Principles apply to the development and deployment of AI systems in the workplace, and should be considered during the whole lifecycle of AI – from design to development, testing, training, deployment and use, oversight, and auditing.
The Principles are applicable to all sectors and intended to be mutually reinforcing, though not all Principles will apply to the same extent in every industry or workplace. The Principles are not intended to be an exhaustive list but instead a guiding framework for businesses. AI developers and employers should review and customize the best practices based on their own context and with input from workers.

The Department's AI Principles for Developers and Employers include:

[North Star] Centering Worker Empowerment: Workers and their representatives, especially those from underserved communities, should be informed of and have genuine input in the design, development, testing, training, use, and oversight of AI systems for use in the workplace.

Ethically Developing AI: AI systems should be designed, developed, and trained in a way that protects workers.

Establishing AI Governance and Human Oversight: Organizations should have clear governance systems, procedures, human oversight, and evaluation processes for AI systems for use in the workplace.

Ensuring Transparency in AI Use: Employers should be transparent with workers and job seekers about the AI systems that are being used in the workplace.

Protecting Labor and Employment Rights: AI systems should not violate or undermine workers' right to organize, health and safety rights, wage and hour rights, and anti-discrimination and anti-retaliation protections.

Using AI to Enable Workers: AI systems should assist, complement, and enable workers, and improve job quality.

Supporting Workers Impacted by AI: Employers should support or upskill workers during job transitions related to AI.

Ensuring Responsible Use of Worker Data: Workers' data collected, used, or created by AI systems should be limited in scope and location, used only to support legitimate business aims, and protected and handled responsibly.
    May 16, 2024
  • Data privacy
The 10 golden rules for establishing a people analytics practice

The ten golden rules:

1. Strategy fit: ensure people analytics projects align with the organisation's strategic goals to maximise value and impact.
2. Continuous employee listening: prioritise the right strategic workforce issues by blending the voice of the employee with the voice of the business.
3. Integrated evidence-based HR services: integrate all evidence-based HR services into one function to improve the speed and quality of people analytics delivery.
4. A clear people analytics operating model: establish a target operating model that clarifies clients, deliverable services, service levels, and delivery timelines.
5. Data privacy compliance: comply with data privacy regulations while also weighing the cultural and business-continuity implications of data analytics.
6. Upskilling HR in data-driven decision-making: improve the HR community's use of data and insights so business opportunities can be translated into analytical services.
7. Managing HR data: build a central, enterprise-wide data infrastructure to improve the ability to combine, share, and analyse data.
8. Product design and thinking: make people analytics services user-friendly, easy to navigate, and motivating for users to work with data in their decision-making.
9. Experimentation and minimum viable products: evaluate and improve solutions incrementally through experiments and MVPs to avoid large-scale implementation failures.
10. Harnessing the potential of AI: build and implement machine-learning-based AI functionality while verifying model performance and validity and controlling for data bias and legality.

These rules show the importance of creating and adopting a people analytics practice through a systemic approach, and underline the need to support the HR function with data and evidence.

It is time for an update on my previous posts on the 10 golden rules of people analytics, simply because so much has happened since then. For example, continuous employee listening, artificial intelligence (AI in HR), agile HR, employee experience, strategic workforce management, and hybrid working are just a few emerging topics in recent years listed in Gartner's hype cycle for HR transformation (2023).

In the last year, I have spoken to many people working in different organisations on establishing people analytics as an accepted practice. I have also joined some great conferences (HRcoreLAB, PAW London & Amsterdam) where I learned from excellent speakers. I also (re)engaged with some excellent people analytics and workforce management vendors, such as Crunchr, Visier, eQ8, AIHR, One Model, Mindthriven, and Agentnoon. Finally, I also enjoyed having multiple elevating discussions with some thought leaders who influenced my thinking (e.g., David Green, Rob Briner, Jonathan Ferrar, Dave Millner, Sjoerd van den Heuvel, Ian O'Keefe, Brydie Lear, Jaap Veldkamp, RJ Milnor, and Nick Kennedy).

These encounters and my ongoing PhD research on adopting people analytics resulted in a treasure trove of new ideas and knowledge that confirmed my experience and belief that it is all about creating an embraced people analytics practice using a systemic approach to support HR in becoming more evidence-based.
So, like I said, it's time for an update. I hope you enjoy and appreciate the post, and I invite you to engage and react in the comments or send me a direct message.

Create a strong strategy FIT.

It is obvious but not yet common practice that your people analytics portfolio needs to align, or fit, with your strategic organisational goals. A strong strategic FIT ensures you execute the people analytics projects with the most value and impact for your organisation. It is therefore important to integrate the decision-making on where to play in people analytics with your periodic HR prioritisation process.

The bigger picture is that two people analytics-related HR interventions, strategic workforce management and continuous employee listening, are pivotal in prioritising the right strategic workforce issues. By blending the insights from these HR interventions, you ensure you are prioritising based on the voice of the business and the voice of the employee. See also my previous post on strategic workforce management.

Because people analytics is at the core of these HR interventions and provides many additional strategic insights, I argue that we need a new HR operating model where the people analytics practice is positioned at the centre of HR.

Grow and integrate evidence-based HR services.

Based on my experience and research, I strongly advise integrating all evidence-based HR services into one function. See also my previous post on establishing a people analytics practice. This integration will enhance the speed and quality of your people analytics delivery, make you a trusted analytical strategic advisor, and make you a more attractive employer for top people analytics talent.
All other people analytics function setups seem like compromises. With evidence-based HR services, I refer to activities such as reporting, advanced analytics, survey management, continuous employee listening, organisational design, and strategic workforce management. Hardly ever is a strategic question answered by only one of these services. In most cases, you will need to combine survey management (i.e., collecting new data), perform advanced analytics (i.e., build a predictive model), and share the outcomes in a dashboard (i.e., reporting) or build new system functionality based on the models (e.g., vacancy recommendation). You will need to combine various people analytics services to provide real strategic value.

Create a clear people analytics operating model.

Because the people analytics practice is maturing, it deserves a clear target operating model. In a target operating model, you clarify to the organisation whom you consider your clients, what services or solutions you can deliver, what service levels your clients can expect, and when and how you will deliver the solution. Being transparent about your target operating model will build trust and legitimacy in your organisation. Inspired by the work of Insight222, a people analytics target operating model consists of a demand engine (understanding and prioritising demand), a solution engine (e.g., data management, building models, designing surveys), and a delivery engine (e.g., dashboards, advisory with story-telling, bringing models to production), ideally covering all the evidence-based HR services mentioned under rule 2 in this post. Additionally, more practices are applying agile principles to reduce time-to-delivery and are using some form of release management to balance capacity.

Build trust and legitimacy.

Compliance with data privacy regulations has been an important topic since the early days of people analytics ten years ago.
Even before the GDPR era, organisations did well to understand when personal data could be collected, used, or shared. Legislation such as the GDPR offers organisations guidance and more structure on how to deal with data privacy issues.

However, being fully compliant is not where responsible data handling ends. Simply because you can, according to data privacy regulations, doesn't mean you should. There are also contextual and ethical elements to take into account. For example, being able, and regulatory-wise allowed, to build an internal sourcing model that matches internal employees with specific skills to internal vacancies doesn't mean you should. From a cultural or business-continuity perspective, creating internal mobility may not be beneficial or desired in specific areas of your organisation. Assessing the implications of using data analytics in a broader context than regulations alone will also enhance the needed trust and legitimacy.

Upskill HR in data-driven decision-making.

Having a mature people analytics practice that delivers high-quality, evidence-based HR services is not enough to ensure value creation for your organisation. Suppose your organisation, including your HR community, struggles to translate business opportunities into analytical services or finds it hard to use data and insights in its day-to-day decision-making. In that case, upskilling is a necessary intervention. HR upskilling in data-driven decision-making is a necessity in growing towards a truly evidence-based HR culture.

Creating awareness of the various analytical opportunities, developing critical thinking, fostering an inquisitive mindset, identifying success metrics for HR interventions and policies, evaluating these metrics, and understanding the power of innovative data services, such as generative AI, are all essential. When upskilling, be sure to recognise the different HR roles and their needs and preferences.
For example, your HR business partners will likely want to develop their skills in identifying strategic workforce metrics and in strategic workforce management. Your COE leads (i.e., HR domain leads), however, will want to develop their ability to collect and understand internal clients' feedback and improve their HR services (e.g., recruitment, learning programs, leadership development). So, diversify your learning approach to make it more effective.

Manage your HR data.

There is enormous value in integrating your HR and business data in a structured manner. Integrated enterprise-wide data allows you to combine, improve, share, and analyse data more efficiently. More organisations are using data warehouse and data lake principles to create this central enterprise-wide data infrastructure, based on, for example, Microsoft Azure or Amazon Web Services technology. A mature people analytics team is best equipped to create an HR data strategy and manage the corresponding data pipeline. HR would do well to improve its capability to manage the data pipeline by hiring data engineers.

Where to position this data management capability and the related skill set is an interesting discussion. The first thought is to position it close to the HR systems and infrastructure function. This setup might work perfectly. However, depending on your HR context and maturity, I argue that the people analytics practice is a good, and sometimes better, alternative. Mature people analytics teams are likely more able to think about data management and about creating data products and services built with machine learning models. Traditional HR systems and infrastructure teams may tend to focus too much on the efficiency of the HR infrastructure (e.g., straight-through processing, rationalising the HR tech landscape).

Excel in product design and thinking.

Successful people analytics or evidence-based HR services excel in product design.
Whether built with PowerBI or vendor-led BI platforms (e.g., Crunchr, Visier, One Model), dashboards must be user-friendly, easy to navigate, and motivate users to work with data in their decision-making. The same applies to functionality based on machine learning models, such as chatbots, learning assistants, or vacancy recommendations. The user design, the functionality provided, and flawless, timely delivery all contribute to maximising the usage of these analytical services and, ultimately, better decision-making.

As important as the product design is product thinking by the product owner. A product owner for, e.g., recruitment or leadership programs should be constantly interested in hearing what internal clients think about their products. This behaviour requires product owners to have a marketing mindset. As part of a larger continuous listening program, an internal client feedback mechanism should provide the information needed to improve your products and services continuously. A product owner should be curious about questions like: Are your internal clients satisfied? Should we tailor the products for different user types? What functionality can we improve or add?

Allow yourself to experiment.

When a solution looks good and makes sense based on your analytics, management tends to go for an immediate big-bang implementation. However, don't be afraid to experiment and learn before rolling out your solution to all possible users. Starting with a minimum viable product (i.e., MVP) allows you to evaluate your product among a select group of users early in the development process. Based on feedback, you can enhance your product in an incremental (i.e., agile) manner. It also enables you, when valuable, to compare treatment groups with non-treatment groups.
These types of experiments (i.e., difference-in-difference comparisons) help you evaluate the effect the new product is intended to have. Testing a minimum viable product (MVP) and obtaining feedback provides additional insights that may prevent a big implementation failure of your new products.

Embrace the potential of AI in HR.

Today, artificial intelligence (AI) is predominantly based on machine learning (ML). These AI-ML models provide powerful functionality such as vacancy and learning recommendations, chatbots, and virtual career or work-schedule assistants. There is no need to fear these applications, but a deeper understanding of them is necessary: implementing this type of functionality without checking and validating it is risky and therefore unwise.

A mature people analytics practice allows you to create and build these AI functionalities internally. You can also buy AI functionality by implementing a vendor tool, but please ensure you do not end up with a new vendor for each AI functionality you desire. If you choose to buy AI functionality, the people analytics team should act as a gatekeeper. Internally built machine learning models are subject to checks and balances, and rightfully so. The same should apply to ML-based AI functionality from external providers: the people analytics team should check the performance and validity of the model and control for biases in the data as well as legal and ethical justification.

The people analytics leader can make the difference.

If you are the people analytics leader within your organisation, it might be daunting, or reassuring, to hear that you can make the difference between failure and success.
You bring the people analytics practice alive by reaching out to stakeholders, developing your team, understanding your clients, learning from external experts, and building a road map to analytical maturity. A successful people analytics practice starts with the right people analytics leader.

As a people analytics leader, you should excel in business acumen, influencing skills, strategic thinking, critical and analytical thinking, understanding the HR system landscape, understanding the possibilities of analytical services, project management, and, last but not least, people management (as all leaders should). With all these capabilities in place, a people analytics leader, together with the people analytics team, becomes a trusted advisor to senior management, understands the most pressing issues within an organisation, can effectively manage the HR data pipeline, and can build new analytical services to enhance decision-making and ultimately drive organisational performance and employee well-being.

I hope you enjoyed my update on the 10 golden rules for establishing a people analytics practice. If you enjoyed the post, please hit like, or feel invited to engage and react in the comments. Send me a direct message if you want to schedule a virtual meeting to exchange thoughts one-on-one. Thanks to Jaap Veldkamp for reviewing.

Author: Patrick Coolen
https://www.linkedin.com/pulse/10-golden-rules-establishing-people-analytics-practice-patrick-coolen-85use/
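The difference-in-difference comparison described under "Allow yourself to experiment" can be sketched in a few lines. The engagement scores, group names, and equal group sizes below are illustrative assumptions, not data from the article:

```python
from statistics import mean

def diff_in_diff(treat_before, treat_after, control_before, control_after):
    """Difference-in-differences estimate: the change in the treatment
    group minus the change in the control group."""
    treat_change = mean(treat_after) - mean(treat_before)
    control_change = mean(control_after) - mean(control_before)
    return treat_change - control_change

# Illustrative weekly engagement scores before/after rolling out an MVP
# to a pilot (treatment) group, alongside a comparable non-treatment group.
pilot_before = [6.0, 6.2, 5.8, 6.0]
pilot_after = [6.8, 7.0, 6.6, 6.8]
control_before = [6.1, 5.9, 6.0, 6.0]
control_after = [6.3, 6.1, 6.2, 6.2]

effect = diff_in_diff(pilot_before, pilot_after, control_before, control_after)
print(round(effect, 2))  # estimated effect of the MVP beyond the common trend
```

The estimate subtracts the control group's change from the pilot group's change, so a trend shared by both groups (e.g., a seasonal mood swing) is not attributed to the MVP.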
    April 15, 2024
  • Data privacy
Workday: It’s Time to Close the AI Trust Gap

Workday, a leading provider of enterprise cloud applications for finance and human resources, recently published a global study recognizing the importance of addressing the AI trust gap. The company believes that trust is a critical factor when implementing artificial intelligence (AI) systems, especially in areas such as workforce management and human resources.

The research results are as follows:

At the leadership level, only 62% welcome AI, and only 62% are confident their organization will ensure AI is implemented in a responsible and trustworthy way. At the employee level, these figures drop to 52% and 55%, respectively.
70% of leaders say AI should be developed in a way that easily allows for human review and intervention, yet 42% of employees believe their company does not have a clear understanding of which systems should be fully automated and which require human intervention.
About 1 in 4 employees (23%) are not confident that their organization will put employee interests above its own when implementing AI (compared to 21% of leaders).
About 1 in 4 employees (23%) are not confident that their organization will prioritize innovating with care for people over innovating with speed (compared to 17% of leaders).
About 1 in 4 employees (23%) are not confident that their organization will ensure AI is implemented in a responsible and trustworthy way (compared to 17% of leaders).

“We know how these technologies can benefit economic opportunities for people—that’s our business. But people won’t use technologies they don’t trust. Skills are the way forward, and not only skills, but skills backed by a thoughtful, ethical, responsible implementation of AI that has regulatory safeguards that help facilitate trust,” said Chandler C. Morse, VP, Public Policy, Workday.

Workday’s study focuses on several key areas:

Section 1: Perspectives align on AI’s potential and responsible use.
“At the outset of our research, we hypothesized that there would be general alignment between business leaders and employees regarding their overall enthusiasm for AI. Encouragingly, this has proven true: leaders and employees are aligned in several areas, including AI’s potential for business transformation, as well as efforts to reduce risk and ensure trustworthy AI.”

Both leaders and employees believe in and hope for a transformation scenario with AI.
Both groups agree AI implementation should prioritize human control.
Both groups cite regulation and frameworks as most important for trustworthy AI.

Section 2: When it comes to the development of AI, the trust gap between leaders and employees widens further.

“While most leaders and employees agree on the value of AI and the need for its careful implementation, the existing trust gap becomes even more pronounced when it comes to developing AI in a way that facilitates human review and intervention.”

Employees aren’t confident their company takes a people-first approach.
At all levels, there is a worry that human welfare isn’t a leadership priority.

Section 3: Data on AI governance and use is not readily visible to employees.

“While employees are calling for regulation and ethical frameworks to ensure that AI is trustworthy, there is a lack of awareness across all levels of the workforce when it comes to collaborating on AI regulation and sharing responsible AI guidelines.”

Closing remarks: How Workday is closing the AI trust gap.

Transparency: Workday can prioritize transparency in their AI systems. Providing clear explanations of how AI algorithms make decisions can help build trust among users. By revealing the factors, data, and processes that contribute to AI-driven outcomes, Workday can ensure transparency in their AI applications.

Explainability: Workday can work towards making their AI systems more explainable.
This means enabling users to understand the reasoning behind AI-generated recommendations or decisions. Employing techniques like interpretable machine learning can help users comprehend the logic and factors influencing AI-driven outcomes.

Ethical considerations: Working on ethical frameworks and guidelines for AI use can play a crucial role in closing the trust gap. Workday can ensure that their AI systems align with ethical principles, such as fairness, accountability, and avoiding bias. This might involve rigorous testing, auditing, and ongoing monitoring of AI models to detect and mitigate any potential biases or unintended consequences.

User feedback and collaboration: Engaging with users and seeking their feedback can be key to building trust. Workday can involve their customers and end users in the AI development process, gathering insights and acting on user concerns. Collaboration and open communication will help Workday enhance their AI systems based on real-world feedback and user needs.

Data privacy and security: Ensuring robust data privacy and security measures is vital for instilling trust in AI systems. Workday can prioritize data protection and encryption, complying with industry standards and regulations. By demonstrating strong data privacy practices, they can alleviate concerns associated with AI-driven data processing.

SOURCE Workday
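The bias testing and auditing mentioned under "Ethical considerations" can be made concrete with a minimal sketch. The screening outcomes, group labels, and the four-fifths threshold applied below are illustrative assumptions, not part of Workday's study:

```python
def selection_rate(outcomes):
    """Share of positive outcomes (1 = advanced, 0 = rejected)."""
    return sum(outcomes) / len(outcomes)

def adverse_impact_check(group_a, group_b, threshold=0.8):
    """Compare selection rates between two groups.

    Returns the absolute rate gap and whether the ratio of the lower
    rate to the higher rate falls below the threshold (the common
    "four-fifths" rule of thumb used in adverse-impact screening).
    """
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    gap = abs(rate_a - rate_b)
    flagged = min(rate_a, rate_b) / max(rate_a, rate_b) < threshold
    return gap, flagged

# Illustrative screening outcomes for two demographic groups.
group_a = [1, 0, 1, 1, 0, 1, 1, 0]  # selection rate 5/8 = 0.625
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # selection rate 3/8 = 0.375

gap, flagged = adverse_impact_check(group_a, group_b)
print(gap, flagged)  # a flagged result warrants human review, not automatic rejection
```

In practice, a team would run such a check per protected attribute and per decision stage, and pair it with model-performance monitoring; the four-fifths cut-off is a screening heuristic, not a legal determination.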
    January 11, 2024