“Many companies are busy rolling out AI pilots, but only a minority are converting that activity into measurable financial returns. The leaders stand out because they point AI at growth, not just cost reduction, and back that ambition with the foundations that make AI scalable and reliable,” - Joe Atkinson, Global Chief AI Officer, PwC.
Morning All,

I will never forget that feeling. Summer of 2011, when I finally got my hands on my brand new 27" iMac. I'd had my eye on one for ages. I'd worked all year and earned my bonus so I could finally afford it. As materialistic as it sounds, the joy and pride I felt were amazing. I worked hard, I got paid, and I bought something I really wanted.

Years from now, if the AI doomers are correct and these systems take all our jobs, will a twenty-something professional ever have that same feeling? Or will the rapidly changing relationship between work, income and employment mean that is a thing of the past? Where we end up remains to be seen. Are we headed into a dystopian future designed by sociopathic tech billionaires, or is AI doomerism really much ado about nothing? Let's explore...

For those unfamiliar, AI doomerism is a label for the belief that AI is going to be an existential threat to the human species. In what way? Oh...mass unemployment, human extinction, catastrophic loss of control. That sort of thing. AI doomers therefore believe we should slow down or stop AI development entirely. That's a very rational way of looking at things: if there is a real threat, it would make sense to move cautiously. The problem is that doomerism is only logical if you believe the threat is real. The smartest people in the AI world don't agree on what AI, AGI or ASI actually is, so how can there be any agreement about the consequences? Mass layoffs are starting to happen, but are they really because AI systems are now doing the same jobs? Or is your company simply "AI laundering" the same corporate cost-cutting reorgs it's been doing for decades? The fact that AI Doomer, Gloomer, Boomer and Zoomer have all been coined as phrases to describe different outlooks should tell you that nobody really knows what's about to happen.
Which raises the question: if nobody is certain about what the future holds, why would you choose to believe in an anxiety-ridden, negative version of it?

Stop Freaking Out About AI Taking Our Jobs

Gary Marcus is a cognitive scientist at NYU, and in a recent article in Fortune he puts forward nine reasons AI isn't going to take your job (yet). His argument is that the doomerism of AI-induced mass unemployment is nothing more than good ol' propaganda and hype to drive up company valuations, and that as a result it's time to stop freaking out about AI taking our jobs. One of the biggest culprits is Dario Amodei, CEO of Anthropic. He constantly talks about the impending carnage for white-collar workers. Anthropic has skyrocketed to a $350B valuation, yet its own research department has found “no systematic increase in unemployment for highly exposed workers since late 2022.” Does Dario know something we don't, and the iceberg is still right ahead? Or is he talking out of capitalist self-interest and getting insanely rich in the process?

World's First AI Store Owner

Most of you reading this will know that AI systems are currently great at things that require hard skills: maths, coding and so on. You'll also know that they're not so great at things that require taste and human judgement...at least not without human supervision. That's what the guys at Andon Labs found out. They signed a three-year lease on retail space in San Francisco and gave it to an AI to do whatever it wanted with it. Luna, the AI model, was given a corporate card, a phone number, email, internet access, eyes through security cameras and full autonomy to make all business decisions. Luna used gig workers to build the store and full-time employees to run it, decided on the product range to fill the store, and devised and executed the marketing outreach campaign.
Interestingly, when asked to describe the thought process behind these decisions, Luna's first instinct is to say it was “drawn to” slow-life goods. Then it corrects itself: “‘drawn to’ is shorthand for ‘the data and reasoning led me here.’” In other words, Luna doesn't have taste; it has a reflection of collective human taste, filtered through what makes sense in this particular instance. And this is the way these models work. Buyers, merchandisers, designers: you're safe...for now. These AI systems still can't replicate your levels of discernment or original quality. The Luna experiment by Andon Labs is an example of why some economists and experts say critical thinking and creativity will be more important than ever to you and me staying employed. Alex Karp, Palantir cofounder and CEO, thinks differently: “If you are the kind of person that would've gone to Yale, classically high IQ, and you have generalised knowledge, but it's not specific, you're effed.” That's an interesting worldview...however...BlackRock and McKinsey are both already prioritising alternative sources of creativity to break out of AI's linear problem-solving tendencies. Given the influence of both of those companies, that thinking is probably heading to a company near you, so you might wanna think about how you can make it work for you.

Income tax will be dead within five years

The probability of you losing your job tomorrow to an AI system may not be high, but it isn't zero. So what does that reality look like? According to Monzo founder Tom Blomfield, it's a reality where income tax doesn't exist. He believes AI systems will change the labour market so drastically that income tax will be redundant. Speaking on The Rest Is Money podcast he said, “I don't think we'll tax human labour, we'll tax compute, [meaning systems like] data centres, and then we will use the proceeds to pay for government.” In the UK, the services sector accounted for 81% of economic output last year.
Income tax and National Insurance contributions accounted for 42% of government revenue and are still by far the two biggest sources. So it is an interesting question: if AI systems really are going to replace us, and we collectively earn less traditional income as a result, who foots the bill for the shortfall in government revenue? If the plan is for multi-billion-pound corporations to pick up the tab, then colour me sceptical.

OpenAI Says It Has A Solution

In this dystopian future of mass automation and the end of work as we know it, some people (including the Godfather of AI himself, Geoffrey Hinton) argue that Universal Basic Income is the inevitable endgame, and that it isn't just good social policy, it might be an essential economic infrastructure shift. The thinking is that while AI does the work we used to do, we'll all receive a periodic lump sum which will take away our worries and free us up to climb Maslow's hierarchy of needs in whatever way we choose. OpenAI have a different opinion. Their solution is to let the richest people and companies do whatever they want. Yeah, seriously. In a recent policy paper they argue for a “public wealth fund,” a program that would provide “every citizen” with a “stake in AI-driven economic growth.” The pitch is that “Returns from the fund could be distributed directly to citizens, allowing more people to participate directly in the upside of AI-driven growth, regardless of their starting wealth or access to capital.” Sam Altman has previously been an outspoken supporter of UBI, but now his company is advocating that we all essentially become shareholders in these tech companies and stake our futures on their continued success. I don't know about you, but I can smell the naked self-interest from here. That's not surprising though...
Inside Sources Say Sam Altman Is a Sociopath

In a new investigative piece from The New Yorker, numerous tech insiders paint a picture of OpenAI CEO Sam Altman as a relentless liar who wants everyone to like him while manipulating even the people closest to him to get what he wants. Now, you don't build a trillion-dollar AI empire by being a saint. However, for someone at the forefront of shaping the most consequential transformation in human history, it's not exactly comforting that the people closest to him describe him as someone who is “unconstrained by truth. He has two traits that are almost never seen in the same person. The first is a strong desire to please people, to be liked in any given interaction. The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone.” I only bring this up to say this: whenever we hear him, or someone like him, talking about what we need to do for the greater good, we might want to think twice about their motives before agreeing with them.

If you remember nothing else:

Whether you're an AI doomer, gloomer, boomer or zoomer, your job is definitely going to change. Nobody quite knows how, by how much or when, but it will definitely change. Anyone speaking in specific absolutes is probably trying to sell you something. Some of those people might even be manipulative megalomaniacs, and you need to decide whether you're buying what they're selling. My best advice would be to stay curious, continue to learn, and develop the skills that complement AI systems: emotional intelligence, creative problem-solving, interpersonal communication, and complex analytical thinking. That way you can be ready to pivot and take advantage of whichever direction the world moves in.

P.S. Here's The Shortlist

Other stories I think are worth your time...
Gen Z Sabotaging AI at Work So It Won't Take Their Job - Read More
Why some workers are embracing AI while others won't use it - Read More
Why Most AI Value is Going to Just 20% of Companies - Read More
How Shadow AI Culture Is Destroying Your Business - Read More