Credit: Image: Mashable composite; Shutterstock, Jack Chadwick
AI at Work

We explore how AI is reshaping the landscape of work, a transformation that is already underway.


While we’re not quite in the era of sentient robot assistants, AI is rapidly positioning itself as your next colleague.

Over half of U.S. employees now use some form of AI in their daily tasks. In a survey of 5,000 workers conducted by the Organisation for Economic Co-operation and Development (OECD), about 80% of AI users said the technology improved their performance at work, largely thanks to increased automation. And for some, ethically integrating AI remains a top workplace concern in 2024.

Although AI has the potential to foster more equitable workplaces, and some of its applications may already be part of your job, it’s worth exercising caution before embracing the technology across the board.

SEE ALSO:
The age of the AI-generated internet is already here

Some apprehension remains regarding job displacement and wage reduction as AI becomes more ingrained in the workforce. In a CNBC and SurveyMonkey survey of U.S. workers, 42% expressed concerns about AI’s impact on their roles, with higher levels of worry among lower-income individuals and workers from communities of color.

Add to that a surge in AI-related scams, ongoing debates over government regulation, privacy concerns, and an inundation of “new” AI releases, and the future of AI remains uncertain.

It’s best to approach AI at work cautiously, armed with the right questions.

What types of AI are we discussing?

Begin with a comprehensive understanding of artificial intelligence. “Artificial intelligence” is no longer one specific term; it now encompasses a broad range of technologies and services.

As described by Mashable’s Cecily Mauran, artificial intelligence now refers to technology that automates or performs specific human-assigned tasks. What is commonly referred to as AI today is more accurately termed generative AI, which can generate text, images, videos, audio, and code based on user prompts. (Generative AI is distinct from artificial general intelligence, a still-hypothetical form of AI that could reason across tasks the way humans do.) Generative AI has faced criticism for generating false information, spreading misinformation, and facilitating scams and deepfakes.

SEE ALSO:
A comprehensive AI glossary to help navigate our evolving world

Other AI forms include basic recommendation algorithms, complex neural networks, and broader machine learning techniques.

As Saira Mueller reports for Mashable, AI has already found its way into the workplace and daily life through applications like Gmail’s predictive features, LinkedIn’s recommendation system, and Microsoft’s suite of Office tools. AI is also present in simpler tasks such as live transcripts during video meetings, voice assistants on personal devices, spell-check functions, and language translations.

Does your company have AI guidelines in place?

Once you’ve determined that an AI tool goes beyond the applications already built into your day-to-day responsibilities and may require additional oversight, it’s worth reaching out to management as a precaution.

Ideally, your company has established guidelines on which AI services are permissible in the workplace and how they should be used, but a 2023 survey by The Conference Board found that three-quarters of companies lacked a formal organizational AI policy. In the absence of rules, seek clarification from your manager, and consider looping in your legal or human resources department, depending on the technology at hand.

Only use generative AI tools sanctioned by your organization.

In a global survey by business management platform Salesforce, 28% of workers said they use generative AI tools, yet only 30% had received training on their proper and ethical use. Alarmingly, 64% of the 7,000 workers surveyed had passed off generative AI’s work as their own.

To curb unsupervised use, the survey’s authors recommended that employees stick to company-approved generative AI tools and platforms, and avoid putting confidential company data or customer information into generative AI prompts.

Even major corporations like Apple and Google have restricted employee use of generative AI in the past.

Factors to consider before using a generative AI tool:

  • Data privacy: Assess the sensitivity of the information you enter into the tool, whether it involves proprietary or personal data, and whether that data is encrypted or otherwise secured.

  • Copyright concerns: If you’re using a generative AI system for creative work, confirm whether the artistic data used to train the model was sourced legally.

  • Accuracy: Validate the information the tool provides, watch out for false outputs, and look into the tool’s track record on accuracy.

SEE ALSO:
Key considerations for using ChatGPT in professional settings

Who benefits from the AI?

Understand where AI fits into your daily workflow and who sees the outputs of the generative AI tools you use. There’s a difference between incorporating an AI tool like a chatbot into your tasks and replacing an entire job function with AI. Consider how your use of AI affects you and your clients, and whether it carries any risks. Questions about when and how AI use should be disclosed remain unanswered, even in the legal sector, though a majority of Americans believe companies should be required to disclose it.

Aspects to consider:

  • Are you using AI primarily to brainstorm ideas or for decision-making processes?

  • Does the AI influence decision-making for you, your colleagues, or clients? Is it used for monitoring employees or evaluating performance?

  • Will the AI-generated content be shared outside the company? Should this information be disclosed, and if so, how?

Who oversees the AI?

Once you have company approval and a solid understanding of the type of AI you’re using, ethical considerations come into play. Many AI watchdogs warn that the haste to innovate has concentrated power among a few Big Tech entities that fund and control most AI development.

The AI policy and research institute AI Now highlights the risks that arise when major tech firms, with their own agendas and controversies, control AI development. In an April 2023 report, the institute cited concerns about discrimination, privacy breaches, security vulnerabilities, environmental impacts, and the control Big Tech firms exercise over large-scale AI models. The institute also criticized certain “open source” generative AI products that function more like black boxes, blocking users and developers from inspecting the AI’s algorithms.

Amid a lack of federal regulation and unclear data privacy policies, concerns persist about unchecked AI development. Following President Joe Biden’s executive order on AI, several software firms agreed to submit safety test results for federal review before launching products, part of a broader push for government oversight. Standard regulatory guidelines, however, are still in progress.

Consider your professional sector, your company’s affiliations, mission statements, and any conflicts of interest that may arise from using products developed by specific AI creators.

Areas for review:

  • Who created the AI?

  • Does the AI build on another company’s work, or does it use an API such as OpenAI’s large language models (LLMs)?

  • Are there any conflicting interests between your company and the AI owner?

  • Is the AI developer transparent about its privacy policies and about how data entered into its generative AI tools is stored?

  • Is the AI developer open to oversight?

Could the AI exhibit biases?

Even sophisticated AIs can mirror the biases inherent in their creators, algorithms, and data sources. AI Now’s April report highlights a tendency for even well-intentioned human oversight to reinforce, rather than counteract, those biases.

AI Now found that the notion of “meaningful” oversight lacks a clear definition, and evidence suggests people tend toward automation bias, deferring to automated systems without scrutiny. In a piece for The Conversation, Casey Fiesler, a technology ethics and education researcher, notes that many tech companies prioritize technological progress over the societal implications of AI use.

Fiesler raises the idea of an “ethical debt” that comes with AI solutions, expressing concern that AI can amplify harmful biases and stereotypes, mislead users, exploit labor, accelerate job displacement, and violate privacy. These issues, she argues, go well beyond technical glitches.

AI applications have already prompted legal and social consequences in sectors like health insurance, following allegations of patient mistreatment. In educational settings, AI misuse has led to plagiarism and to biased disciplinary actions, affecting both students and educators professionally.

The ethical stakes of using AI at work are rarely low, and those ethical debts must be acknowledged before integrating AI into your job.

Topics: Artificial Intelligence, Social Good


Chase DiBenedetto
Social Good Reporter

Chase joined Mashable’s Social Good team in 2020, covering online stories about digital activism, climate justice, accessibility, and media representation. Her work also touches on how these conversations manifest in politics, popular culture, and fandom. Sometimes she’s very funny.

