Singapore's Aicadium acquires BasisAI for global AI ambitions
By Celine Chen
SINGAPORE, NNA - Launched just over two weeks ago, Aicadium, a global technology company founded by Singapore's Temasek Holdings Ltd, has acquired BasisAI, a local artificial intelligence startup it had previously backed.
BasisAI, known for providing scalable and responsible artificial intelligence (AI) solutions to global companies, has merged with Aicadium in a deal of undisclosed value.
The enlarged Aicadium will expand operations to serve a global enterprise customer base as the global AI market is poised to jump from $29.86 billion in 2020 to $299.64 billion by 2026, a compound annual growth rate of 35.6 percent, according to a Facts and Factors market research report.
Aicadium's expansion also comes as more IT-driven companies recognize the competitive advantage of low-risk AI systems that are also likely to be broadly compliant with the clearer regulations expected to be introduced around the world.
BasisAI was co-founded in 2018 by Liu Feng-Yuan, who was previously a chief data scientist with the Singapore government, and twin brothers Linus Lee and Silvanus Lee, both top data scientists who had helmed units in global tech companies like Twitter, Uber and Dropbox.
Their mission was to build an AI powerhouse helping businesses in the Asia-Pacific region accelerate their digitalization. In 2019, BasisAI raised $6 million in seed funding from Temasek and Sequoia India.
A year later, the startup launched its proprietary end-to-end machine learning platform, Bedrock, which enables organizations to deploy responsible AI solutions swiftly.
Among its clients are the Singapore government, top Singapore banks DBS, UOB and OCBC, Singapore Airlines and transport conglomerate ComfortDelGro. BasisAI also partnered with global companies like Accenture, PwC, and AWS.
Aicadium was launched on August 12 as Temasek's global center of excellence in artificial intelligence. It aims to collaborate with Temasek portfolio companies and other enterprises seeking to develop and scale end-to-end AI solutions to improve business outcomes.
Supported by an international team of data scientists and engineers based in Singapore and California, Aicadium leverages "a repeatable and scalable process, and a platform of AI algorithms, models and tools to help clients achieve operational AI within their organizations", according to its press release of August 31.
Liu, CEO of BasisAI, said, "This acquisition allows us to accelerate our growth strategy, broaden our reach and expand our offering that is built on the strong foundation of a powerful AI platform and a combined team of talent. We are excited to be joining forces with Aicadium to augment our strength in building responsible AI systems."
Liu will assume an executive leadership position at Aicadium together with BasisAI co-founder Linus Lee.
Michael Zeller, head of AI strategy and solutions at Temasek, is heartened that the BasisAI team shares Aicadium's determination to deliver AI solutions that are "responsibly and ethically designed, developed, and deployed".
In its insight article on August 10, McKinsey & Company said, "As artificial intelligence becomes increasingly embedded in the fabric of business and our everyday lives, both corporations and consumer-advocacy groups have lobbied for clearer rules to ensure that it is used fairly."
In May, the European Union became the first governmental body in the world to issue a comprehensive response in the form of draft regulations aimed specifically at the development and use of AI.
While the proposed regulations would apply to any AI system used or providing outputs within the European Union, the move would have implications for organizations around the world, said McKinsey, adding that it would be "just one more step toward global AI regulation".
The company said its research shows many organizations still have a lot of work to do to prepare themselves for the impending regulation and address the risks associated with AI more broadly.
In 2020, only 48 percent of organizations surveyed said they recognized regulatory-compliance risks, and even fewer, 38 percent, said they were actively working to address them.
Far smaller proportions of the companies surveyed were aware of glaring risks, such as those concerning reputation, privacy, and fairness.
"These statistics are alarming given the well-publicized incidents in which AI has gone awry and because AI-related regulations, such as Europe’s General Data Protection Regulation (GDPR), already exist in parts of the world," said McKinsey.
AI has presented tremendous opportunities for advancements in technology and society, revolutionizing how organizations create value in products and industries, from healthcare and e-commerce to mining and financial services.
But companies need to address the risks of the technology in order for them to continue to innovate with AI at the momentum required to remain competitive and reap the biggest return on their AI investments.
Backed by its findings, McKinsey said companies seeing the highest returns from AI are far more likely to report that they are engaged in active risk mitigation than those whose results are less impressive.
The EU has classified AI risks into three categories.
Unacceptable risks include subliminal, manipulative or exploitative techniques that cause harm, as well as remote biometric identification systems used in public spaces for law enforcement.
The high-risk category covers systems that assess creditworthiness, use biometric identification in non-public spaces, pose health risks to people when they fail, or are used in the administration of justice.
The third category covers limited or minimal risks, such as AI-enabled chatbots, video and computer games, spam filters, and systems for inventory management and consumer and market segmentation.
Writing in a blog post on the BasisAI website, April Chin, its head of strategic partnerships, said the proposed EU regulation will undoubtedly serve as a reference point for other future regulations globally.
She said, "Regulatory frameworks regarding AI, will only continue to grow, especially as we attempt to prevent AI failures. The key will be to have a mix of regulation and technical standards, that is focused on the highest standards for AI development, while also ensuring progress in innovation."
However, the AI industry lacks proven methods to translate AI governance principles into practice.
"Many governments, multisectoral groups and private sector companies have introduced ‘trustworthy’ AI principles. But many are still struggling to roll-out these principles into concrete technical standards,"said Chin.
These standards also need to be crafted in the context of industry and use cases.
"For example, there are 21 technical definitions of fairness. Depending on whether you take the perspective of the decision maker, society or end user, the metrics for fairness will differ. The multiplicity of contexts, stakeholders and applications need to be considered when designing meaningful technical standards," she said.