The impact of disruptive technology on workers usually depends on whether that technology replaces or augments labour. The current artificial intelligence (AI) revolution has elicited predictions ranging from mass job losses to enhanced economic productivity that creates more jobs.
I, for one, can’t say for sure at this moment whether we should panic about automation technologies replacing human labour. That is not to say there are no risks or that there will be no job losses. To be sure, some workers will be displaced, and under current economic arrangements, inequality will likely increase.
However, we also do not know enough about the future to say exactly what will happen to the labour market 10, or even five, years from now. What we do know is that past technological breakthroughs have shown scale effects: automation reduces costs, which raises demand for products and services, which in turn raises demand for labour.
Additionally, we should not treat work as a fixed quantity. In economics, the lump of labour fallacy refers to the mistaken belief that there is only a fixed amount of work to be done in an economy. In reality, an economy not only grows over time but also becomes more diversified.
A modern occupation typically comprises several different tasks. Even if automation eliminates some of the tasks done by humans, workers can usually shift to other types of tasks while remaining in the same occupation.
Even with these mitigating factors, it remains crucial to create and sustain decent and high-quality jobs in the economy while ensuring that the workforce is properly trained for them.
The corollary to training and education is the approach to knowledge acquisition and application. This is where we run the risk of the LinkedIn-isation of knowledge, where sharing summaries and definitions is taken as a sign of subject expertise. This is by no means a Malaysian LinkedIn problem; similar problems are likely widespread in higher education, where students mistake definitions and descriptions for the construction of arguments and solutions.
What AI excels at is scanning vast amounts of information and summarising it from multiple sources into an easily digestible format. The output is always grammatically flawless and polished, the kind of sentences you hear presented at conferences. This is very useful, and personally, I use AI to check references and sentence structure.
However, that doesn’t mean you should skip learning grammar in school, nor does AI replace the essential skills of thesis development, argument structuring, and idea generation.
Over the past few months, the number of environment, social and governance (ESG) experts in my LinkedIn feed has grown from about 20 to what feels like 100. Almost all of their knowledge-sharing is grammatically perfect and polished. I know some of my connections on the platform are actual field practitioners, but most of the sharing is unmistakably AI-generated.
I’m not hating here. There’s nothing wrong with sharing interesting AI summaries if you’re trying to make a name in the education space, but for most people, it can’t be the only thing.
Additionally, its utility is diminishing; soon, anyone should be able to prompt for information, if they can’t already. It certainly can’t serve as the foundation for claiming expertise or projecting oneself as an expert. The internet is already awash with know-it-alls, or as we call them here, ‘palataos’.
Moving from simply sharing AI-generated content to producing AI-augmented work is the challenge Malaysia must meet to thrive in the digital economy. It underscores the difference between being primarily users of technology and becoming innovators.
Today, in Malaysia, we need fewer people standing on the sidelines, waiting to say “I told you so”, or sharing definitions, and more people taking action, stepping up, and embracing risk.