The gig economy has proven hugely disruptive, transforming how we consume goods and services. From job posting sites to ride sharing platforms and food delivery, it’s proven a runaway success, contributing over £20bn a year to the UK economy according to the Office for National Statistics. It’s defined as the exchange of labour for money via digital platforms that actively match providers with customers on a short-term, payment-by-task basis, but its very reliance upon that digital infrastructure also makes it highly susceptible to attack or abuse.
Gig platforms rely upon Application Programming Interfaces (APIs) to provide real-time services, process payments, and connect the ecosystem’s players, and it’s those APIs that present the weak link in the chain. API attacks are constantly evolving, as evidenced by the OWASP Top 10 API Security Risks, updated just over a year ago, and Generative AI (GenAI) is now being used to craft more sophisticated attacks. These include AI-powered scraping, account takeovers (ATO), and even fake interactions.
As gig economy businesses grow, bots enable attackers to exploit these vulnerabilities at scale by carrying out AI-automated scraping or volumetric attacks. Traditional application defence solutions, which were never designed for the modern complexities of APIs or the intelligent tactics of GenAI, struggle to detect and mitigate these threats.
The shortcomings of application security
Application security typically relies on embedding code into end-user applications and devices, which slows deployment and leaves platforms vulnerable to reverse engineering. Armed with GenAI, attackers can bypass these systems using AI-generated scripts that mimic human behaviour. These solutions also struggle to protect the API-to-API communications that gig economy businesses use to handle transactions, and it’s these machine-to-machine interactions that are most susceptible to GenAI-powered attacks. Nor can these solutions recognise the complex patterns and subtle behaviours of a GenAI-enabled attack, which will often emulate a human.
Some businesses are particularly at risk from GenAI attacks against their APIs. These include ride sharing and delivery platforms that use APIs to facilitate real-time matching between drivers and customers. They can expect to see GenAI-powered attacks that carry out advanced scraping to extract pricing data, or AI-powered bots that simulate customer requests and overwhelm the platform’s systems.
Freelance job posting sites are also at risk. Here, AI can be used to create fake job postings and manipulate proposals or to automate the scraping of sensitive freelancer information, enabling competitors to undercut prices or steal business. Similarly, online staffing agencies could see GenAI used to automate job application fraud or hijack worker accounts, submitting fraudulent claims for job completions or manipulating availability slots.
On-demand service providers, such as those that supply tradespeople, can also be abused. These platforms use APIs to manage job postings, worker profiles, and payments. Bots could be used to create fraudulent service requests, post fake customer reviews, or manipulate the rating system, undermining customer and client trust. And online learning and tutoring platforms could see GenAI used to fake tutoring sessions, manipulate payment structures, or abuse refund systems, all through the APIs that handle transactions, communications, and scheduling.
Such attacks could prove devastating, damaging customer trust, causing loss of revenue and ceding market share, so it’s vital these gig economy businesses protect their platforms. That means getting ahead of the curve and mitigating the risk of such attacks, which is best achieved by protecting the APIs that underpin these exchanges and through bot detection and management.
Countering attack types
Consider competitive scraping, for instance, whereby AI-enabled bots accumulate data on pricing, profiles, and job postings at a rapid pace, enabling competitors to undercut the business. Bot management can help address this: using machine learning, it’s possible to detect abnormal scraping patterns and block them in real time.
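As a minimal sketch of how such detection might work, the example below trains an unsupervised anomaly detector on per-client request metrics; the feature names, figures and thresholds are illustrative assumptions rather than a production design.

```python
# Minimal sketch: flag abnormal scraping behaviour with an unsupervised model.
# Assumes per-client request metrics (requests/min, distinct endpoints hit,
# error rate) are already aggregated from API gateway logs; all values here
# are illustrative.
from sklearn.ensemble import IsolationForest

# Each row: [requests_per_minute, distinct_endpoints, error_rate]
baseline = [
    [12, 4, 0.01], [8, 3, 0.00], [15, 5, 0.02], [10, 4, 0.01],
    [9, 3, 0.00], [14, 6, 0.03], [11, 4, 0.01], [13, 5, 0.02],
]

model = IsolationForest(contamination=0.05, random_state=42)
model.fit(baseline)

# A scraper hammering pricing endpoints looks nothing like the baseline.
candidates = [[600, 48, 0.20], [11, 4, 0.01]]
for features, verdict in zip(candidates, model.predict(candidates)):
    print(features, "->", "block" if verdict == -1 else "allow")
```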
ATO, in which credential-stuffing attacks fire millions of stolen login combinations at gig worker accounts, can also be addressed using entity behaviour analytics. This technology recognises suspicious login attempts and stops takeovers before they succeed.
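A simplified sketch of the behavioural signal at play follows: a single source attempting logins across many distinct accounts within a short window is the classic stuffing fingerprint. The window size and threshold are illustrative assumptions; real systems weigh many more signals.

```python
# Minimal sketch: flag credential stuffing by behaviour rather than signatures.
from collections import defaultdict, deque
import time

WINDOW_SECS = 60
MAX_DISTINCT_ACCOUNTS = 5  # illustrative threshold

attempts = defaultdict(deque)  # source_ip -> deque of (timestamp, username)

def check_login(source_ip, username, now=None):
    now = time.time() if now is None else now
    log = attempts[source_ip]
    log.append((now, username))
    while log and now - log[0][0] > WINDOW_SECS:
        log.popleft()  # drop attempts outside the sliding window
    distinct = {user for _, user in log}
    if len(distinct) > MAX_DISTINCT_ACCOUNTS:
        return "challenge"  # step-up auth or block before takeover succeeds
    return "allow"

# A stuffing run cycles through usernames from one source and quickly trips it.
for i in range(8):
    print(check_login("203.0.113.7", f"user{i}", now=i * 0.5))
```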
But perhaps one of the most difficult attack types to stop is business logic abuse. This sees the API’s legitimate functionality turned to malicious purposes, such as triggering refund requests or creating fake bookings. Because each individual call is valid, this type of activity can only be detected by monitoring and analysing entity behaviour, which is beyond the reach of traditional application security.
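To illustrate, the sketch below baselines refund behaviour per account: every call it records is individually valid, and only the aggregate pattern triggers review. The ratio limit and minimum sample size are illustrative assumptions.

```python
# Minimal sketch: per-entity baselining of legitimate-but-abusable actions.
from collections import Counter

bookings, refunds = Counter(), Counter()
REFUND_RATIO_LIMIT = 0.5  # illustrative
MIN_EVENTS = 10           # wait for enough evidence before judging

def record_event(account_id, event):
    counter = refunds if event == "refund" else bookings
    counter[account_id] += 1
    total = bookings[account_id] + refunds[account_id]
    if total >= MIN_EVENTS and refunds[account_id] / total > REFUND_RATIO_LIMIT:
        return "review"  # escalate: abusive pattern across valid API calls
    return "ok"

# An account refunding most of what it books eventually stands out.
for event in ["booking", "refund", "refund"] * 5:
    status = record_event("acct-42", event)
print(status)  # "review" once the refund ratio exceeds the limit
```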
Finally, fraudulent job postings and AI-created interactions can also flood platforms, overloading systems and frustrating legitimate users in a kind of denial-of-service attack. Using bot detection, it’s possible to distinguish fraudulent interactions from genuine customer requests, protecting bona fide traffic.
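One simple behavioural tell, sketched below under illustrative assumptions: scripted floods tend to arrive at an unnaturally regular cadence, whereas human traffic is bursty. Timing is just one of many signals a real bot detector would combine.

```python
# Minimal sketch: separate scripted floods from human traffic by timing alone.
# The coefficient of variation (stddev/mean) of request gaps is the signal;
# the threshold is an assumption.
import statistics

def looks_automated(timestamps, cv_threshold=0.1):
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 5:
        return False  # not enough evidence yet
    cv = statistics.stdev(gaps) / statistics.mean(gaps)
    return cv < cv_threshold  # suspiciously regular cadence

bot = [i * 2.0 for i in range(10)]                 # a request every 2.000s
human = [0, 1.2, 5.9, 6.3, 9.8, 15.1, 15.4, 21.0]  # bursty and irregular
print(looks_automated(bot), looks_automated(human))  # True False
```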
Gig businesses need to act sooner rather than later: the National Cyber Security Centre (NCSC) has issued stark warnings that threat actors are beginning to harness training data sets and build out their AI capabilities. So far, it suggests, GenAI has only been weaponised by well-funded threat actors, but the trickle-down effect will see the technology become more accessible and more widely used as it begins to be marketed as-a-service on the dark web. So, make no mistake, these attacks are coming.
For the gig economy this should be a wake-up call to the need for unified, scalable API security. Protecting gig economy infrastructure from attack, particularly from GenAI, with purpose-built solutions rather than legacy application security must become a priority, or we risk destabilising these businesses and the revenue and jobs they contribute.