AI as the New Workplace Assistant—Promise, Limits, and Practical Realities
As organizations increasingly experiment with AI tools to answer employee questions, interpret policy documents, or guide internal procedures, it is tempting to see AI as a kind of universal workplace assistant: always available, endlessly patient, and capable of reducing administrative burden. But this week’s readings reminded me that using AI as a catch-all solution requires a far more grounded approach. Nemorin et al. (2023) highlight how AI is often surrounded by inflated promises, and this made me more cautious about positioning an AI assistant as a complete replacement for human judgment. The way these AI bots take things so literally reminds me of the old Amelia Bedelia books!
If an internal AI tool provides incorrect information about procedures or compliance requirements, the consequences can be far more serious than a simple technology glitch. The authors also note that AI hype often conceals deeper issues related to privacy and surveillance, which pushed me to consider how internal search tools might inadvertently track or profile employees based on the questions they ask. This could inherently bias the AI assistant as it collects information on the employee population and the kinds of questions employees raise. Can you imagine an AI chatbot telling a high-performing employee they should just quit?
Similarly, Sofia et al. (2023) argue that AI is reshaping workforce expectations by creating constant demands for reskilling. This made me rethink the assumption that AI assistants automatically reduce workload; instead, employees need training to use these tools effectively and to understand their limitations, especially when the AI is interpreting policies or guiding procedural decisions. Their discussion of employee trust also resonated with me. Deploying AI internally is not just a technical decision; it is a cultural one. Employees are far more likely to rely on an AI assistant when the organization communicates clearly about how it works, what data it uses, and where human oversight still matters.
Touretzky et al. (2019) reinforce this human-centered approach by emphasizing the importance of AI literacy. Their argument that foundational AI understanding is essential made me realize that workplace AI assistants should not merely give answers but should support the development of employee judgment. When people understand how AI models process information, they become more discerning and less likely to accept outputs uncritically. The authors’ focus on ethical reasoning also shaped my thinking about internal AI tools. If an AI assistant is delivering guidance on workplace policies, the organization has a responsibility to ensure the system does so ethically, accurately, and in ways that support, not undermine, employee autonomy. Sometimes this may inadvertently expose sensitive organizational initiatives, such as a RIF (reduction in force), since AI tools do not understand how to properly incorporate timing into employee matters.
Overall, these readings helped me see AI assistants not as a replacement for employee work, but as a carefully governed support tool that requires human literacy, ethical design, and transparent communication. As I read my classmates’ reflections later this week, I’m curious how others are considering the balance between efficiency and responsibility in AI integration, and what they believe organizations owe employees when deploying such tools.
Applying Activity Theory to Transform Learning Impact
Reference:
Marroquín, E. M. (2025). Activity theory as framework for analysis of workplace learning in the context of technological change. Learning and Teaching: The International Journal of Higher Education in the Social Sciences.
https://doi.org/10.1016/j.later.2025.1000083
Annotation:
The rise of AI has outpaced the ability of businesses and experts to adapt to the changes it brings. Marroquín (2025) explores how Activity Theory can serve as a powerful framework for understanding how workplace learning evolves within technologically mediated environments. The author argues that as artificial intelligence and automation transform job functions, learning must be viewed not as a discrete event but as an integral part of the work activity system (comprising tools, rules, roles, community, and the object of work).
Rather than focusing on isolated training sessions, the study suggests that learning occurs through the contradictions and adaptations that arise as employees interact with new tools and changing structures. By examining these tensions, the article highlights how organizational learning can drive systemic transformation and measurable performance outcomes, making it incredibly relevant to the field of organizational development.
Marroquín’s use of Activity Theory offers a rich, systems-level analysis that transcends traditional learning frameworks focused on individual cognition. The methodology draws on the framework’s core elements (mediation, contradictions, and expansive learning), which provide a structured yet flexible lens for analyzing real-world complexity in workplace settings.
The strength of this article lies in its integration of theory and practice: it effectively links conceptual depth with practical implications for managing learning in AI-enabled environments. At Allegiant Professional Resources, our learning and development initiatives echo Marroquín’s perspective: learning is only valuable if it changes work outcomes. We’ve moved away from counting inputs such as “2 hours of training completed” or “5,000 skills tagged” and instead focus on impact measures, such as reduced error rates, faster cycle times, or improved decision accuracy after interventions.
Activity Theory helps us trace how those results occur by analyzing the full activity system: what tools employees use, which rules or norms guide their work, how their roles interact, and what the shared object of their activity is. When contradictions emerge (for example, when a new AI dashboard changes reporting workflows), we view them as learning opportunities rather than inefficiencies. Marroquín’s work reinforces our philosophy that training is not the outcome; performance improvement is. It provides a theoretical foundation for measuring not activity but transformation within the work system, a principle that continues to shape Allegiant’s evidence-based approach to organizational learning and impact measurement.
Try using AI Personalized Podcasts to Drive Retention & Employee Development
Reference:
Do, T. D., Bin Shafqat, U., Ling, E., & Sarda, N. (2024). PAIGE: Examining learning outcomes and experiences with personalized AI-generated educational podcasts (arXiv preprint arXiv:2409.04645). https://doi.org/10.48550/arXiv.2409.04645
Annotation:
The researchers take a deep dive into how generative AI can convert textbook chapters into personalized educational podcasts for a group of 180 college students. The researchers compared traditional textbook reading with both generalized and personalized AI-generated podcasts across multiple subject areas. Their findings showed that students overwhelmingly preferred podcasts to reading, and that personalized podcasts tailored to learners’ backgrounds and interests improved comprehension in several disciplines.
The takeaway is clear: AI-driven, personalized audio content can enhance learning engagement and outcomes when designed with relevance and learner context in mind.
The study’s methodology, integrating AI-driven podcast generation with validated user experience measures, models exactly the kind of data-informed experimentation L&D professionals can use to evaluate their own digital learning tools. It also underscores the importance of delivery design: conversational tone, pacing, and modality can deeply influence learner motivation. Consultants working with clients on upskilling strategies can take from this that AI isn’t just a content generator; it’s an adaptive facilitator that can align learning experiences to individual needs and organizational culture.
At Allegiant, our consulting work centers on helping organizations create inclusive learning environments that make workplace learning more effective for all employees, particularly those whose neurodivergence offers unique cognitive strengths. Studies like this one inform how we think about designing micro-learning and leadership development content that doesn’t just “teach,” but connects meaningfully with how diverse minds engage with information.
We also see a connection between this research and how business leaders who host industry podcasts can influence engagement and retention. A 2023 LinkedIn Workplace Learning Report found that employees who feel connected to their organization’s thought leadership (through podcasts or leadership-led storytelling) are 33% more likely to stay with the company. Integrating AI-generated podcasts or internal learning channels can give employees that same sense of inclusion and relevance.
As our research and consulting practice evolves, we’re exploring how personalization, audio learning, and neurodivergent engagement strategies can converge to make corporate learning both equitable and deeply human.