‘Orwellian’ AI lie detector project challenged in EU court

A legal challenge to a controversial EU-funded research project that used artificial intelligence for facial “lie detection”, with the aim of speeding up immigration checks, was heard today in Europe’s Court of Justice.

The transparency lawsuit against the EU’s Research Executive Agency (REA), which oversees the bloc’s funding programs, was filed in March 2019 by Patrick Breyer, MEP of the Pirate Party Germany and a civil liberties activist — who has successfully sued the Commission before over a refusal to disclose documents.

He is seeking the release of documents on the ethical evaluation, legal admissibility, marketing and results of the project, and hoping to establish the principle that publicly funded research must comply with EU fundamental rights — and to help prevent public money being wasted on AI “snake oil” in the process.

“The EU keeps having dangerous surveillance and control technology developed, and will even fund weapons research in the future. I hope for a landmark ruling that will allow public scrutiny and debate on unethical publicly funded research in the service of private profit interests”, said Breyer in a statement following today’s hearing. “With my transparency lawsuit, I want the court to rule once and for all that taxpayers, scientists, media and Members of Parliament have a right to information on publicly funded research — especially in the case of pseudoscientific and Orwellian technology such as the ‘iBorderCtrl video lie detector’.”

The court has yet to set a decision date on the case but Breyer said the judges questioned the agency “intensively and critically for over an hour” — and revealed that documents relating to the AI technology involved, which have not been publicly disclosed but had been reviewed by the judges, contain information such as “ethnic characteristics”, raising plenty of questions.

The presiding judge went on to query whether it wouldn’t be in the interests of the EU research agency to demonstrate that it has nothing to hide by publishing more information about the controversial iBorderCtrl project, per Breyer.

AI ‘lie detection’

The research in question is controversial because the notion of an accurate lie detector machine remains science fiction, and with good reason: There’s no evidence of a “universal psychological signal” for deceit.

Yet this AI-fuelled commercial R&D “experiment” to build a video lie detector scored over €4.5 million (~$5.4 million) in EU research funding under the bloc’s Horizon 2020 scheme. The trials entailed testers being asked to respond to questions put to them by a virtual border guard while a webcam scanned their facial expressions and the system sought to detect what an official EC summary of the project describes as “biomarkers of deceit”, in an effort to score the truthfulness of their facial expressions (yes, really).

The iBorderCtrl project ran between September 2016 and August 2019, with the funding spread between 13 private or for-profit entities across a number of Member States (including the U.K., Poland, Greece and Hungary).

Public research reports that the Commission said would be published last year (per a written response to Breyer’s questions challenging the lack of transparency) do not yet appear to have seen the light of day.

Back in 2019 The Intercept was able to test out the iBorderCtrl system for itself. The video lie detector falsely accused its reporter of lying, judging that she had given four false answers out of 16 and giving her an overall score of 48. Per its report, a policeman who assessed the results said that score triggered a suggestion from the system that she be subjected to further checks (though she was not, as the system was never run for real during border tests).

The Intercept said it had to file a data access request — a right that’s established in EU law — in order to obtain a copy of the reporter’s results. Its report quoted Ray Bull, a professor of criminal investigation at the University of Derby, who described the iBorderCtrl project as “not credible” — given the lack of evidence that monitoring microgestures on people’s faces is an accurate way to measure lying.

“They are deceiving themselves into thinking it will ever be substantially effective and they are wasting a lot of money. The technology is based on a fundamental misunderstanding of what humans do when being truthful and deceptive”, Bull told the publication.

The notion that AI can automagically predict human traits if you just pump in enough data is distressingly common — just look at recent attempts to revive phrenology by applying machine learning to glean “personality traits” from face shape. So a face-scanning AI “lie detector” sits in a long and ignoble anti-scientific “tradition”.

In the 21st century it’s frankly incredible that millions of euros of public money are being funnelled into rehashing terrible old ideas — before you even consider the ethical and legal blindspots inherent in the EU funding research that runs counter to fundamental rights set out in the EU’s charter. When you consider all the bad decisions involved in letting this fly it looks head-hangingly shameful.

The granting of funds to such a dubious application of AI also appears to ignore all the (good) research that has been done showing how data-driven technologies risk scaling bias and discrimination.

We can’t know for sure, though, because only very limited information has been released about how the consortium behind iBorderCtrl assessed ethics considerations in its experimental application — which is a core part of the legal complaint.

The challenge in front of the European Court of Justice in Luxembourg poses some very awkward questions for the Commission: Should the EU be pouring taxpayer cash into pseudoscientific “research”? Shouldn’t it be trying to fund actual science? And why does its flagship research program — the jewel in the EU crown — have so little public oversight?

The fact that a video lie detector made it through the EU’s “ethics self-assessment” process, meanwhile, suggests the claimed “ethics checks” aren’t worth a second glance.

“The decision on whether to accept [an R&D] application or not is taken by the REA after Member States representatives have taken a decision. So there is no public scrutiny, there is no involvement of parliament or NGOs. There is no [independent] ethics body that will screen all of those projects. The whole system is set up very badly”, says Breyer.

“Their argument is basically that the purpose of this R&D is not to contribute to science or to do something for public good or to contribute to EU policies but the purpose of these programs really is to support the industry — to develop stuff to sell. So it’s really supposed to be an economical program, the way it has been devised. And I think we really actually need a discussion about whether this is right, whether this should be so”.

“The EU’s about to regulate AI and here it is actually funding unethical and unlawful technologies”, he adds.

No external ethics oversight

Not only does it look hypocritical for the EU to be funding rights-hostile research but — critics contend — it’s a waste of public money that could be spent on genuinely useful research (be it for a security purpose or, more broadly, for the public good; and for furthering those “European values” EU lawmakers love to refer to).

“What we need to know and understand is that research that will never be used because it doesn’t work or it’s unethical or it’s illegal, that actually wastes money for other programs that would be really important and useful”, argues Breyer.

“For example in the security program you could maybe do some good in terms of police protective gear. Or maybe in terms of informing the population in terms of crime prevention. So you could do a lot of good if these means were used properly — and not on this dubious technology that will hopefully never be used”.

The latest incarnation of the EU’s flagship research and innovation program, which takes over from Horizon 2020, has a budget of ~€95.5 billion for the 2021-2027 period. And driving digital transformation and developments in AI are among the EU’s stated research funding priorities. So the pot of money available for “experimental” AI looks massive.

But who will be making sure that money isn’t wasted on algorithmic snake oil — and dangerous algorithmic snake oil in instances where the R&D runs so clearly counter to the EU’s own charter of fundamental human rights?

The European Commission declined multiple requests for spokespeople to talk about these issues but it did send some on-the-record points (below), along with some background information regarding access to documents, which is a key part of the legal complaint.

The Commission’s on-the-record statements on “ethics in research” started with the claim that “ethics is given the highest priority in EU funded research”.

“All research and innovation activities carried out under Horizon 2020 must comply with ethical principles and relevant national, EU and international law, including the Charter of Fundamental Rights and the European Convention on Human Rights”, it also told us, adding: “All proposals undergo a specific ethics evaluation which verifies and contractually obliges the compliance of the research project with ethical rules and standards”.

It did not elaborate on how a “video lie detector” could possibly comply with EU fundamental rights — such as the right to dignity, privacy, equality and non-discrimination.

And it’s worth noting that the European Data Protection Supervisor (EDPS) has raised concerns about misalignment between EU-funded scientific research and data protection law, writing in a preliminary opinion last year: “We recommend intensifying dialogue between data protection authorities and ethical review boards for a common understanding of which activities qualify as genuine research, EU codes of conduct for scientific research, closer alignment between EU research framework programmes and data protection standards, and the beginning of a debate on the circumstances in which access by researchers to data held by private companies can be based on public interest”.

On the iBorderCtrl project specifically the Commission told us that the project appointed an ethics advisor to oversee the implementation of the ethical aspects of research “in compliance with the initial ethics requirement”. “The advisor works in ways to ensure autonomy and independence from the consortium”, it claimed, without disclosing who the project’s (self-appointed) ethics advisor is.

“Ethics aspects are constantly monitored by the Commission/REA during the execution of the project through the revision of relevant deliverables and carefully analysed in cooperation with external independent experts during the technical review meetings linked to the end of the reporting periods”, it went on, adding that: “A satisfactory ethics check was conducted in March 2019”.

It did not provide any further details about this self-regulatory “ethics check”.

“The way it works so far is basically some expert group that the Commission sets up will propose the call for tender”, says Breyer, discussing how the EU’s research program is structured. “It’s dominated by industry experts, it doesn’t have any members of parliament in there, it only has — I think — one civil society representative in it, so that’s falsely composed right from the start. Then it goes to the Research Executive Agency and the actual decision is taken by representatives of the Member States.

“The call [for research proposals] itself doesn’t sound so bad if you look it up — it’s very general — so the problem really was the specific proposal that they proposed in response to it. And these are not screened by independent experts, as far as I understand it. The issue of ethics is dealt with by self assessment. So basically the applicant is supposed to indicate whether there is a high ethical risk involved in the project or not. And only if they indicate so will experts — selected by the REA — do an ethics assessment.

“We don’t know who’s been selected, we don’t know their opinions — it’s also being kept secret — and if it turns out later that a project is unethical it’s not possible to revoke the grant”.

The hypocrisy charge comes in sharply here because the Commission is in the process of shaping risk-based rules for the application of AI. And EU lawmakers have been saying for years that artificial intelligence technologies need “guardrails” to make sure they’re applied in line with regional values and rights.

Commission EVP Margrethe Vestager has talked about the need for rules to ensure artificial intelligence is “used ethically” and can “support human decisions and not undermine them”, for example.

Yet EU institutions are simultaneously splashing public funds on AI research that would clearly be unlawful if implemented in the region, and which civil society critics decry as obviously unethical given the lack of scientific basis underpinning “lie detection”.

In an FAQ section of the iBorderCtrl website, the commercial consortium behind the project concedes that real-world deployment of some of the technologies involved would not be covered by the existing EU legal framework — adding that this means “they could not be implemented without a democratic political decision establishing a legal basis”.

Or, put another way, such a system would be illegal to actually use for border checks in Europe without a change in the law. Yet European taxpayer funding was nonetheless ploughed in.

A spokesman for the EDPS declined to comment on Breyer’s case specifically but he confirmed that its preliminary opinion on scientific research and data protection is still relevant.

He also pointed to further related work which addresses a recent Commission push to encourage pan-EU health data sharing for research purposes — where the EDPS advises that data protection safeguards should be defined “at the outset” and also that a “thought through” legal basis should be established ahead of research taking place.

“The EDPS recommends paying special attention to the ethical use of data within the [health data sharing] framework, for which he suggests taking into account existing ethics committees and their role in the context of national legislation”, the EU’s chief data supervisor writes, adding that he’s “convinced that the success of the [health data sharing plan] will depend on the establishment of a strong data governance mechanism that provides for sufficient assurances of a lawful, responsible, ethical management anchored in EU values, including respect for fundamental rights”.

tl;dr: Legal and ethical use of data must be the DNA of research efforts — not a check-box afterthought.

Unverifiable tech

In addition to a lack of independent ethics oversight of research projects that gain EU funding, there is — currently and worryingly for supposedly commercially minded research — no way for outsiders to independently verify (or, well, falsify) the technology involved.

In the case of the iBorderCtrl tech no meaningful data on the outcomes of the project has been made public and requests for data sought under freedom of information law have been blocked on commercial interest grounds.

Breyer has been trying without success to obtain information about the results of the project since it finished in 2019. The Guardian reported in detail on his fight back in December.

Under the legal framework wrapping EU research he says there’s only a very limited requirement to publish information on project outcomes — and only long after the fact. His hope is thus that the Court of Justice will agree “commercial interests” can’t be used to over-broadly deny disclosure of information in the public interest.

“They basically argue there is no obligation to examine whether a project actually works so they have the right to fund research that doesn’t work”, he tells TechCrunch. “They also argue that basically it’s sufficient to exclude access if any publication of the information would damage the ability to sell the technology — and that’s an extremely wide interpretation of commercially sensitive information.

“What I would accept is excluding information that really contains business secrets like source code of software programs or internal calculations or the like. But that certainly shouldn’t cover, for example, if a project is labelled as unethical. It’s not a business secret but obviously it will harm their ability to sell it — but obviously that interpretation is just outrageously wide”.

“I’m hoping that this [legal action] will be a precedent to clarify that information on such unethical — and also unlawful if it were actually used or deployed — technologies, that the public right to know takes precedence over the commercial interests to sell the technology”, he adds. “They are saying we won’t release the information because doing so will diminish the chances of selling the technology. And so when I saw this then I said well it’s definitely worth going to court over because they will be treating all requests the same”.

Civil society organizations have also been thwarted in attempts to get detailed information about the iBorderCtrl project. The Intercept reported in 2019 that researchers at the Milan-based Hermes Center for Transparency and Digital Human Rights used freedom of information laws to obtain internal documents about the iBorderCtrl system, for example, but the hundreds of pages they got back were heavily redacted — with many completely blacked out.

“I’ve heard from [journalists] who have tried in vain to find out about other dubious research projects that they are massively withholding information. Even stuff like the ethics report or the legal assessment — that’s all stuff that doesn’t contain any commercial secrets, as such”, Breyer continues. “It doesn’t contain any source code, nor any sensitive information — they haven’t even released these partially.

“I find it outrageous that an EU authority [the REA] will actually say we don’t care what the interest is in this because as soon as it could diminish sales then we will withhold the information. I don’t think that’s acceptable, both in terms of taxpayers’ interests in knowing about what their money is being used for but also in terms of the scientific interest in being able to test/to verify these experiments on the so called ‘deception detection’ — which is very contested if it really works. And in order to verify or falsify it scientists of course need to have access to the specifics about these trials.

“Also democratically speaking, if ever the legislator wants to decide on the introduction of such a system or even on the framing of these research programs, we basically need to know the details — for example, what was the number of false positives? How well does it really work? Does it have a discriminatory effect because it works less well on certain groups of people, as facial recognition technology does? That’s all stuff that we really urgently need to know”.

Regarding access to documents related to EU-funded research the Commission referred us to Regulation no. 1049/2001 — which it said “lays down the general principles and limits” — though it added that “each case is analysed carefully and individually”.

However, the Commission’s interpretation of the Horizon program’s regulations appears to entirely exclude the application of freedom of information rules — at least in the case of the iBorderCtrl project.

Per Breyer, the rules limit public disclosure to a summary of the research findings, which can be published some three or four years after the completion of the project.

“You’ll see an essay of five or six pages in some scientific magazine about this project and of course you can’t use it to verify or falsify the technology”, he says. “You can’t see what exactly they’ve been doing — who they’ve been talking to. So this summary is pretty useless scientifically and to the public and democratically and it takes ages. So I hope that in the future we will get more insight and hopefully a public debate”.

The EU research program’s legal framework is secondary legislation. So Breyer’s argument is that a blanket clause about protecting “commercial interests” should not be able to trump fundamental EU rights to transparency. But of course it will be up to the court to decide.

“I think I stand some good chance especially since transparency and access to information is actually a fundamental right in the EU — it’s in the EU charter of fundamental rights. And this Horizon legislation is only secondary legislation — they can’t deviate from the primary law. And they need to be interpreted in line with it”, he adds. “So I think the court will hopefully say that this is applicable and they will do some balancing in the context of the freedom of information which also protects commercial information but subject to prevailing public interests. So I think they will find a good compromise and hopefully better insight and more transparency.

“Maybe they’ll blacken out some parts of the document, redact some of it but certainly I hope that in principle we will get access to that. And thereby also make sure that in the future the Commission and the REA will have to hand over most of the stuff that’s been requested on this research. Because there’s a lot of dubious projects out there”.

A better system of research project oversight could start, per Breyer, with a funding-decision committee that is not composed mostly of industry and EU Member State representatives (who of course will always want EU cash to come to their region) but also includes parliamentary representatives, more civil society representatives and scientists.

“It should have independent participants and those should be the majority”, he says. “That would make sense to steer the research activities in the direction of public good, of compliance with our values, of useful research — because what we need to know and understand is research that will never be used because it doesn’t work or it’s unethical or it’s illegal, that wastes money for other programs that would be really important and useful”.

He also points to a new EU research program being set up that’s focused on defence — under the same structure, lacking proper public scrutiny of funding decisions or information disclosure, noting: “They want to do this for defence as well. So that will be even about lethal technologies”.

To date the only disclosures around iBorderCtrl have been a few parts of the technical specifications of its system and some of a communications report, per Breyer, who notes that both were “heavily redacted”.

“They don’t say for example which border agencies they have introduced this system to, they don’t say which politicians they’ve been talking to”, he says. “The interesting thing actually is that part of this funding is also presenting the technology to border authorities in the EU and politicians. Which is very interesting because the Commission keeps saying look this is only research; it doesn’t matter really. But in actual fact they are already using the project to promote the technology and the sales of it. And even if this is never used at EU borders funding the development will mean that it could be used by other governments — it could be sold to China and Saudi Arabia and the like.

“And also the deception detection technology — the company that is marketing it [a Manchester-based company called Silent Talker Ltd] — is also offering it to insurance companies, or to be used on job interviews, or maybe if you apply for a loan at a bank. So this idea that an AI system would be able to detect lies risks being used in the private sector very broadly and since I’m saying that it doesn’t work at all and it’s basically a lottery lots of people risk having disadvantages from this dubious technology”.

“It’s quite outrageous that nobody prevents the EU from funding such ‘voodoo’ technology”, he adds.

The Commission told us that “The Intelligent Portable Border Control System” (aka iBorderCtrl) “explored new ideas on increasing efficiency, convenience and security of land border crossing”, and like all security research projects it was “aimed at testing new ideas and technologies to address security challenges”.

“iBorderCtrl was not expected to deliver ready-made technologies or products. Not all research projects lead to the development of technologies with real-world applications. Once research projects are over, it is up to Member States to decide whether they want to further research and/or develop solutions studied by the project”, it also said. 

It also pointed out that specific application of any future technology “will always have to respect EU and national law and safeguards, including on fundamental rights and the EU rules on the protection of personal data”.

However, Breyer also calls foul on the Commission for seeking to deflect public attention by claiming ‘it’s only R&D’ or that it’s not deciding on the use of any particular technology. “Of course factually it creates pressure on the legislator to agree to something that has been developed if it turns out to be useful or to work”, he argues. “And also even if it’s not used by the EU itself it will be sold somewhere else — and so I think the lack of scrutiny and ethical assessment of this research is really scandalous. Especially as they have repeatedly developed and researched surveillance technologies — including mass surveillance of public spaces”.

“They have projects on bulk data collection and processing of Internet data. The security program is very problematic because they do research into interferences with fundamental rights — with the right to privacy”, he goes on. “There are no limitations really in the program to rule out unethical methods of mass surveillance or the like. And not only are there no material limitations but also there is no institutional set-up to be able to exclude such projects right from the beginning. And then even once the programs have been devised and started they will even refuse to disclose access to them. And that’s really outrageous and as I said I hope the court will do some proper balancing and provide for more insight and then we can basically trigger a public debate on the design of these research schemes”.

Pointing again to the Commission’s plan to set up a defence R&D fund under the same industry-centric decision-making structure — with a “similarly deficient ethics appraisal mechanism” — he notes that while there are some limits on EU research being able to fund autonomous weapons, other areas could make bids for taxpayer cash — such as weapons of mass destruction and nuclear weapons.

“So this will be hugely problematic and will have the same issue of transparency, all the more of course”, he adds.

On transparency generally, the Commission told us it “always encourages projects to publicise as much as possible their results”. While, for iBorderCtrl specifically, it said more information about the project is available on the CORDIS website and the dedicated project website.

If you take the time to browse to the “publications” page of the iBorderCtrl website you’ll find a number of “deliverables” — including an “ethics advisor”; the “ethics advisor’s first report”; an “ethics of profiling, the risk of stigmatization of individuals and mitigation plan”; and an “EU wide legal and ethical review report” — all of which are listed as “confidential”.

Why these co-founders turned their sustainability podcast into a VC-backed business

When Laura Wittig and Liza Moiseeva met as guests on a podcast about sustainable fashion, they jibed so well together that they began one of their own: Good Together. Their show’s goal was to provide listeners with a place to learn how to be eco-conscious consumers, but with baby steps.

Wittig thinks the non-judgmental environment (one that doesn’t knock a consumer for not being zero-waste overnight) is the show’s biggest differentiator. “Then, people were emailing us and asking how they can be on our journey beyond being a listener,” Wittig said. Now, over a year after launching the show, the co-hosts are turning validation from listeners into the blueprint for a standalone business: Brightly.

Brightly is a curated platform that sells vetted eco-friendly goods and shares tips about conscious consumerism. While the startup is launching with more than 200 products from eco-friendly brands, such as Sheets & Giggles and Juice Beauty, the long-term vision is to launch its own line of Brightly-branded products. The starting lineup will include two to four products in the home space.

To get those products out by the holiday season, Brightly tells TechCrunch that it has raised $1 million in venture funding from investors, including Tacoma Venture Fund, Keeler Investments, Odile Roujol (a FAB Ventures backer and former L’Oréal CEO) and Female Founders Alliance.

The funding caps off a busy 12 months for Brightly. The startup has gone through Snap’s Yellow accelerator, an in-house effort from the social media company that began in 2018. As part of the program Snap invests $150,000 in each Yellow startup for an equity stake. The company also did Ready Set Raise, an equity-free accelerator put on by Female Founders Alliance, in the fall.

With new funding, Brightly is seeking to take a Glossier-style approach to become the next big brand in commerce: gather a community by recommending great products, then turn the strategy on its head and sell those superfans in-house products under the same brand.

“We have access to a community of women who are beating our door down to shop directly with us and have exclusive products made for them,” Wittig said.

Brightly wants to be more than a “boring storefront” one could quickly whip up on Shopify or Amazon, Wittig says.

The company’s curation process, which every product goes through before being listed on the platform, is extensive. The startup makes sure that every product is created with sustainable and ethical supply chain processes and sustainable material. The team also interviews every brand’s founders to understand the genesis of any product that lives on the Brightly platform. The co-founders also weigh the durability and longevity of products, adopting what Wittig sees as a “Wirecutter approach.”

“It’s more like, ‘why would we pick an ethically produced leather handbag over something that might be made not from leather but wouldn’t last too long necessarily,’ ” she said. “These are the conversations we have with our audience, because the term eco-friendly is very much our grayscale.”


More than 250,000 people come to Brightly, either through their app or website, every day, according to Wittig. The startup monetizes largely through brand partnerships and getting those users in front of paid products.


The monetization strategy is similar to one a podcast might use: affiliate links or product placement mid-episode. But while the co-founders are relying on this strategy right now, they see the opportunity to create their own e-commerce company as larger and more lucrative.

“The billion-dollar opportunity is not with that,” Wittig said. “The value will be going direct commerce and selling our picks of ethical sustainable goods.”

The transition from podcasting about eco-friendly goods to creating them in-house marks a strong pivot. The co-founders consider creating a direct commerce channel to be a larger, and likely more lucrative, opportunity than the podcasting business.

Beyond creating a line of their own products, Brightly is thinking about how to partner with white-label sustainable products. Another option, Wittig said, is to partner with big corporations to get products on their shelves with colors and customization for Brightly. An example of an ideal partnership would be Reformation’s recent partnership with Blueland.

Wittig declined to share more details on how they plan to win, but likened the strategy to that of Goop or Glossier, two companies that started with content arms and drew their community into a commerce platform.

“It’s not going to be a Thrive Market where there are hundreds and thousands of sustainable goods on there. It’s going to be much more curated,” she said.

COVID-19 has helped the startup further validate the need for a platform that unites a conscious consumer community.

“We are all so aware of the purchasing power we have,” she said. “As consumers we go out and support small businesses by getting coffee on the go. But before, we did not think twice about getting everything from Amazon.”

The conversation with investors hasn’t been as simple, the co-founder said. Investors continue to be “hands off” about community-based platforms because they are unsure the model will work. Wittig says that many bearish investors have placed bets on singular direct-to-consumer brands, such as Away or Blueland.

“Those investors know the rising costs of customer acquisition, and see what happens when you don’t have a community that surrounds our business,” she said.

Brightly is betting that the future of commerce brands has to start with the go-to-market, and then bring in the end product, instead of the other way around. The end goal here for Brightly is attracting, and generating excitement from, Gen Z and millennial shoppers. To do so, Wittig says that Brightly is experimenting with ways to build social elements into the shopping experience.

Leslie Feinzaig, the founder of Female Founders Alliance, said that what’s special about Brightly is that it “demonstrated demand before building for it.”

“I think a lot of people today could build software to connect people and sell things, but very few people could get thousands of fanatical followers to actually engage with each other and make that software useful,” Feinzaig said. “Brightly built that community with matchsticks and tape.”

GajiGesa, a fintech startup serving underbanked Indonesian workers, raises $2.5 million seed round

GajiGesa, a fintech company that offers Earned Wage Access (EWA) and other services for workers in Indonesia, has raised $2.5 million in seed funding. The round was co-led by Defy.vc and Quest Ventures. Other participants included GK Plug and Play, Next Billion Ventures, Alto Partners Multi-Family Office, Kanmo Group and strategic angel investors.

The company was founded last year by husband-and-wife team Vidit Agrawal and Martyna Malinowska. Agrawal was Uber’s first employee in Asia and has also served in leadership positions at Carro and Stripe. Malinowska led product development at Standard Chartered’s SC Ventures and alternative credit-scoring platform LenddoEFL.

About 66% of Indonesia’s population of 260 million is “unbanked,” meaning they don’t have a bank account and have limited access to financial services like loans. Agrawal and Malinowska decided to launch GajiGesa in Indonesia because Malinowska worked with many unbanked workers while at LenddoEFL. While at Uber, Agrawal also worked with drivers across Southeast Asia whose average earnings were about $250 a month (excluding Singapore), and he said the top issue they faced was harassment by money lenders.


GajiGesa’s app

“These hardworking Indonesians had no fair or formal sources for easy access to capital. Further, the most common reason for borrowing was short-term liquidity issues,” Agrawal told TechCrunch. “But workers were forced to borrow either long-term, high ticket size loans or short-term loans with exorbitantly high-interest rates.”

Having immediate access to earned wages, instead of waiting for a semi-monthly or monthly paycheck, can help alleviate financial stress and make it easier for workers to manage their income and handle emergencies. Companies that have started instant payment services for workers in other countries include Square, London-based startup Wagestream and Gusto.

Since launching in October 2020, GajiGesa has added over 30 employers to its platform, serving tens of thousands of workers in total. It integrates into a company’s existing human resources management and payroll systems. Workers can get earned wages immediately, track earnings, pay bills, buy prepaid cards and access financial education resources through an app.

GajiGesa does not charge interest or require collateral, since all its users are pre-approved by their employers. Companies decide whether to charge fees or to offer GajiGesa as part of their benefits packages, and they also get access to analytics that can help them create targeted incentives or new benefits for their workforce.
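To illustrate the general idea behind earned wage access, here is a minimal sketch in Python of a pro-rata accrual calculation. It is purely illustrative: the formula, the 30-day pay cycle and the example numbers are assumptions for the sake of the sketch, not GajiGesa’s actual logic, which the company has not published.

```python
from datetime import date

def available_earned_wages(monthly_salary: float, today: date, already_withdrawn: float) -> float:
    """Illustrative pro-rata earned-wage calculation (not GajiGesa's actual logic).

    Assumes wages accrue evenly over a 30-day pay cycle that starts on the 1st
    of the month, and that earlier withdrawals reduce the available balance.
    """
    days_worked = today.day  # days elapsed in the current pay cycle
    earned_so_far = monthly_salary * days_worked / 30
    return max(earned_so_far - already_withdrawn, 0.0)

# Example: a worker earning 4,200,000 IDR a month checks their balance mid-cycle
print(available_earned_wages(4_200_000, date(2021, 2, 15), already_withdrawn=500_000))
# -> 1600000.0
```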

During COVID-19, Agrawal said the startup has “seen insatiable demand and support for GajiGesa’s EWA solution from employers. This is partly attributed to the various challenges employers are facing due to the effects of COVID-19, but our platform is designed to support employers and employees in the long-term. The value of EWA and the other services we offer is not limited to the pandemic.”

Grab announces program to help increase COVID-19 vaccinations in Southeast Asia

Grab, the Southeast Asian ride-hailing and on-demand delivery giant, announced a program to increase access to COVID-19 vaccinations today. Its goal is to have all of its employees, as well as driver and delivery partners, vaccinated by 2022 (excluding people who are medically unable to receive shots). Grab also said it will work with governments to provide information about vaccines through its app, and is in discussions to provide last-mile vaccine distribution, and transportation to and from vaccination centers.

The company currently has operations in eight Southeast Asian countries: Singapore, Cambodia, Indonesia, Malaysia, Myanmar, the Philippines, Thailand and Vietnam. Grab joins a growing roster of private companies around the world that have offered to help governments with their vaccination programs. In the United States, these include tech companies like Microsoft, Oracle, Salesforce and Epic. Meanwhile, China’s largest ride-hailing company, Didi Chuxing, is pledging $10 million to support vaccination programs in 13 countries.

In a statement, Russell Cohen, Grab’s group managing director of operations, said, “The quicker we can achieve herd immunity, the sooner our communities and economies can start to rebuild. Public-private partnership has been critical in taking on some of the pandemic’s biggest battles, and this collaboration should continue.”

For drivers and delivery partners, Grab said it will subsidize COVID-19 vaccine costs not covered by national vaccination programs. The company will also extend its Group Prolonged Medical Leave insurance policy to cover income lost by drivers as a result of potential side effects from getting vaccinated. Employees and immediate family members will have any costs not covered by national programs paid for by Grab.

In terms of vaccine education, the Grab app will prominently display information from governments and health authorities, and run user surveys to help them understand public sentiment about COVID-19 vaccines. The company says its app has been downloaded more than 214 million times.

SquadCast adds video podcast recording

Remote video podcasting is surprisingly still something of a Wild West these days. Given the massive sums of money currently being pumped into the category — not to mention the fact that pretty much all podcasting became remote podcasting in 2020 — you’d expect there to be a more unified solution. Many still continue to rely on catchall teleconferencing software to get the job done (full disclosure: I’ve been recording my podcasts on Zoom during the pandemic).

It’s not for lack of trying of course. A number of companies are vying to become the Zoom or Skype of remote podcasts, including, notably, Zencastr. SquadCast is another big name in the space. The company claims a pretty big footprint, though, again, its primary competitors are still non-specialized video calling apps.

While those platforms are generally reliable and ubiquitous, they do have their drawbacks: namely recording quality. SquadCast’s big selling point has been higher-quality recording than those services where sound is something of an afterthought. This week, the Oakland-based company is adding video recording to the mix.

“We were on an upward trajectory before COVID, but demand during the pandemic has resulted in over 280% growth in both revenue and customer acquisition,” CEO Zachariah Moreno said in a release. “Since video recording is our most requested feature by current, past, and prospective customers, it was the natural next step to continue to move the needle in virtual recording for podcast professionals.”

The platform is adding the feature for existing customers starting this quarter. The latest beta version of the software records video locally as separate files at 720p. Once the recording is over, it will convert the files into MP4 or WebM.
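For a rough sense of what that post-recording step involves, here is an illustrative Python sketch that re-encodes a locally captured track into an MP4 using ffmpeg. This is an assumption-laden sketch of the general workflow, not SquadCast’s actual pipeline, and the file names are hypothetical.

```python
import subprocess
from pathlib import Path

def convert_to_mp4(recording: Path) -> Path:
    """Illustrative post-recording conversion (not SquadCast's actual pipeline).

    Re-encodes a locally captured video track into an H.264/AAC MP4 via ffmpeg.
    """
    output = recording.with_suffix(".mp4")
    subprocess.run(
        ["ffmpeg", "-y", "-i", str(recording), "-c:v", "libx264", "-c:a", "aac", str(output)],
        check=True,  # raise if ffmpeg exits with an error
    )
    return output

# Example: convert one participant's locally recorded 720p track (hypothetical file)
convert_to_mp4(Path("guest_track_720p.webm"))
```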

Consumer plans for SquadCast’s version 3.0 beta with Studio Quality Video Recording start at $40/month and go up to $300/month. Zencastr began rolling out its own video recording feature in beta over the summer.

Daily Crunch: TikTok will downrank ‘unsubstantiated’ claims

TikTok announces additional steps to fight misinformation, Myanmar’s military cracks down on Facebook and Google’s subsea cable goes online. This is your Daily Crunch for February 3, 2021.

The big story: TikTok will downrank ‘unsubstantiated’ claims

TikTok had already said it would try to reduce misinformation by removing videos flagged by fact checkers for including false information. Today it announced that it will go a step further by flagging videos where the fact checkers’ findings are inconclusive.

For example, the company said that there are cases where fact checkers cannot verify information in a video because events are still unfolding. Those “unsubstantiated” videos will then include a large banner, as well as an additional reminder prompt before users will be able to share them.

This feature is launching in the United States and Canada but will become available globally in “coming weeks.”

The tech giants

Myanmar military government orders telecom networks to temporarily block Facebook — The move comes after days of unrest in Myanmar, where earlier this week military took control of the country and declared a state of emergency for a year after detaining civilian leader Aung San Suu Kyi.

Google’s new subsea cable between the US and Europe is now online — The almost 4,000-mile cable has a total capacity of 250 terabits per second.

Instagram confirms it’s working on a ‘Vertical Stories’ feed — This could give the app a more TikTok-like feel.

Startups, funding and venture capital

Vivino raises $155M for wine recommendation and marketplace app — The app and the company behind it have been helping people enjoy better wine since 2010.

Good Eggs raises $100M and plans to launch in Southern California — Good Eggs says that in the past year, it has grown revenue to the nine figures (more than $100 million), hired more than 400 employees and nearly doubled its customer base.

Rocket.Chat raises $19M for its open-source approach to integrated enterprise messaging — The service is used by banks, the U.S. Navy, NGOs and other organizations to set up and run any variety of secure virtual communications services from one place.

Advice and analysis from Extra Crunch

Spotify Group Session UX teardown: The fails and their fixes — Essentially a “party mode,” the feature offers a way for participants to contribute to a collaborative playlist in real time and control what’s playing across everyone’s devices.

Edtech valuations aren’t skyrocketing, but investors see more exit opportunities — Thirteen VCs discuss how their deal-making has changed in the last year.

Deep Science: AIs with high class and higher altitudes — This roundup kicks off with a study looking at the relative positions of the U.S., EU and China in the AI “race.”

(Extra Crunch is our membership program, which helps founders and startup teams get ahead. You can sign up here.)

Everything else

Global smartphone shipments expected to rebound 11% this year — New numbers from Gartner point to a rebound to pre-2020 levels.

Todd Rundgren is about to launch a geofenced virtual tour — Rundgren is staging the tour with support from NoCap, the livestreaming concert startup founded by musicians Cisco Adler and Donavon Frankenreiter.

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 3pm Pacific, you can subscribe here.

Spotify hints toward plans for podcast subscriptions, à la carte payments

Spotify again signaled its interest in developing new ways to monetize its investments in podcasts. In the company’s fourth-quarter earnings, chief executive Daniel Ek suggested the streaming media company foresees a future where there will be multiple business models for podcasts, including, potentially, both ad-supported subscriptions and à la carte options.

The company, which also revealed that its podcast catalog has now grown to 2.2 million programs, said it’s seen increasing demand for the audio format in recent months.

For example, 25% of Spotify’s monthly active users now engage with podcasts, up from 22% just last quarter. Podcast consumption is also increasing, with listening hours having nearly doubled year-over-year in the fourth quarter.

Today, podcasts on Spotify’s platform are available to both free and paid users and are monetized with ads. This is still a key focus for the company — Spotify even recently acquired a podcast hosting and monetization platform, Megaphone, to help make streaming ad insertion technology available to its third-party publishers while also growing its targetable podcast inventory.

But Spotify recently put its feelers out about different means of monetizing podcasts, too.

Late last year, for instance, the company was spotted running a survey that asked its customers whether they would be willing to pay for a standalone podcast subscription and, if so, what it would look like and how much it would cost.

At the time, the survey offered a few different concepts.

At the low end, a subscription could offer ad-supported exclusive episodes and bonus content for $3 per month. This would be similar to Stitcher Premium, which today provides exclusives from top shows and other bonus episodes. But Spotify’s suggested version included ads, while Stitcher Premium is ad-free.

A middle option suggested a plan that would be even closer to Stitcher Premium, with exclusive shows and bonus material but no ads. This even matched Stitcher Premium’s price of $5 per month. And at the high-end, subscribers could get early access to ad-free interviews and episodes for $8 per month.

Looks like the premium podcast plan would be ad-free and some mix of exclusive extra content at price points somewhere between $3-$8. pic.twitter.com/ArK8xYg0CM

— Andrew Wallenstein (@awallenstein) November 6, 2020

A survey, of course, is only meant to gauge consumer demand for such a subscription, and doesn’t indicate that Spotify has a new product in the works. (Spotify said the same when asked to comment on the news at the time.)

However, it’s clear that investors also want to know what Spotify is thinking when it comes to recouping its sizable investments in podcasts.

Asked if Spotify thought customers would be willing to pay for podcasts, Ek on the earnings call responded that he believed there were several new models that could be explored.

“I think we’re in the early days of seeing the long-term evolvement of how we can monetize audio on the internet. I’ve said this before, but I don’t believe that it’s a one-size-fits-all,” he said. “I believe, in fact, that we will have all business models, and that’s the future for all media companies — that you will have ad-supported subscriptions and à la carte sort of in the same space, of all media companies in the future.”

“And you should definitely expect Spotify to follow that strategy and that pattern,” Ek added, more definitively.

The answer seemed to indicate that Spotify is considering some of the ideas in that recent survey — of getting consumers to pay for some podcasts, instead of accessing them all for free or having them bundled into their music subscription.

Of course, that would change the meaning of the word “podcasts,” which today refers to freely available, serialized audio programs distributed via RSS feeds.

If Spotify chooses to paywall podcasts behind subscriptions or à la carte payments, then they’re no longer really podcasts — they’re a new sort of premium audio program.

This is an area where Spotify has plenty of room to grow, considering the significant investment it has made in podcasts over the years. To date, that’s included buying up content producers like Gimlet Media, The Ringer and Parcast, as well as signing top creators like Joe Rogan, Addison Rae, Kim Kardashian West, DC Comics, Michelle Obama and The Duke and Duchess of Sussex, among others. Spotify also bought podcast tools like Anchor and other ad technology and hosting services.

The advantage with podcasts is that Spotify has the ability to monetize them in multiple ways at once — with ads and subscriptions or direct payments, if it chose. And, of course, there are no licensing fees or royalties to contend with, as with streaming music.

Spotify could also adjust the podcast payments model as needed to fit its different geographies and the way customers around the world prefer to consume and pay for podcast content.

None of this thinking was about near-term launches, Ek also clarified.

“I think it’s early days, though, to specifically kind of look at how that could play out,” he said, talking about how the different models could take shape. “But, obviously, if that were to be the case, that revenue profile would be different than how we do music.”

CA Supreme Court denies lawsuit challenging Prop 22’s constitutionality

The California Supreme Court today declined to hear the lawsuit filed by a group of rideshare drivers in California and the Service Employees International Union, which alleged that Proposition 22 violates the state’s constitution.

“We are disappointed in the Supreme Court’s decision not to hear our case, but make no mistake: we are not deterred in our fight to win a livable wage and basic rights,” Hector Castellanos, a plaintiff in the case, said in a statement. “We will consider every option available to protect California workers from attempts by companies like Uber and Lyft to subvert our democracy and attack our rights in order to improve their bottom lines.”

The suit argued Prop 22 makes it harder for the state’s legislature to create and enforce a workers’ compensation system for gig workers. It also argued that Prop 22 violates the rule limiting ballot measures to a single issue, and that it unconstitutionally defines what would count as an amendment to the measure. As it stands today, Prop 22 requires a seven-eighths legislative supermajority to amend the measure.

“We’re thankful, but not surprised, that the California Supreme Court has rejected this meritless lawsuit,” Jim Pyatt, a rideshare driver who advocated for Prop 22 and worked with the Yes on 22 campaign, said in a statement. “We’re hopeful this will send a strong signal to special interests to stop trying to undermine the will of voters who overwhelmingly stood with drivers to pass Proposition 22. The ballot measure was supported by nearly 60 percent of California voters across the political spectrum including hundreds of thousands of app-based drivers. It’s time to respect the vast majority of California voters as well as the drivers most impacted by Prop 22.”

Meanwhile, Uber, Lyft and other companies have said they have their eyes on pursuing Prop 22-like legislation elsewhere. Given their opposition to classifying gig workers as employees, it came as no surprise when both companies separately said they would pursue similar legislation in other parts of the country and the world.

Lyft, for example, has created external organizations that push for the independent contractor classification. Two of those organizations are Illinoisans for Independent Work and New Yorkers for Independent Work. Illinoisans for Independent Work was established in June and funded by Lyft with $1.2 million, according to committee filings. The stated purpose of the committee is “to support candidates who share the ideology of our organization and the value of independent work.”

But as we’ve previously discussed, the implementation of Prop 22 doesn’t mark the end of the battle for some gig workers to achieve employee status. There is a concerted effort to keep organizing this year and to fight back wherever the next legislative battle emerges.

Myanmar military government orders telecom networks to temporarily block Facebook

Myanmar’s new military government has ordered local telecom firms to temporarily block Facebook until midnight on February 7, days after the military seized power in the Southeast Asian nation in a coup.

Several users on the Myanmar subreddit reported moments ago that Facebook was already inaccessible on their phones, suggesting that internet service providers had already started to comply with the order, which demanded compliance by midnight Wednesday. (It’s about 4.30am Thursday in Myanmar at the time of writing.)

Myanmar’s new government alleges that Facebook is contributing to instability in the country, and in its order it cited a section of the local telecom law that justifies many actions for the greater benefit of the public and the state.

NetBlocks, which tracks global internet usage, reports that MPT, a state-owned telecom operator that commands the market, has blocked Facebook as well as Messenger, Instagram and WhatsApp on its network.

A Facebook spokesperson said the company was “aware that access to Facebook is currently disrupted for some people.” The spokesperson added: “We urge authorities to restore connectivity so that people in Myanmar can communicate with their families and friends and access important information.”

BREAKING: Myanmar’s government is now blocking Facebook (including Instagram, WhatsApp, and Messenger) until Feb 7 at midnight.

Over 22 million people use Facebook in Myanmar, and it’s a critical tool for citizens to share information and organize. #KeepitOn https://t.co/E5Y46xuE7P

— Access Now (@accessnow) February 3, 2021

The move comes after days of unrest in Myanmar, where earlier this week military took control of the country and declared a state of emergency for a year after detaining civilian leader Aung San Suu Kyi and other democratically elected leaders of her ruling National League for Democracy. Following the coup, citizens in many parts of Myanmar had reported facing internet and cellular outages for several hours.

Facebook, which has become synonymous with the internet for citizens in Myanmar, has long been blamed for not doing enough to curb the spread of misinformation that prompted real-world violence in the country.

A human rights report in 2018 said that Facebook was used to “foment division and incite offline violence” in Myanmar. Later in the same year, Facebook executives agreed that they hadn’t done enough.

BuzzFeed News reported this week that Facebook executives have now pledged to take proactive content moderation steps in Myanmar, which they termed a “Temporary High-Risk Location.”

Social platform veteran Sriram Krishnan is Andreessen Horowitz’s latest general partner

Today, Andreessen Horowitz founder Marc Andreessen announced that social media product veteran Sriram Krishnan will be joining the firm as their latest general partner.

Krishnan, whose previous roles include stints at Snap, Facebook and Twitter, has gained a higher profile in recent weeks from his recurring audio show “The Good Time Show” on Clubhouse. His recent talk with Tesla CEO Elon Musk was something of a watershed moment for the audio chat platform, driving plenty of new attention to the budding app.

This announcement follows a report in The Information regarding the hire earlier this week.

Krishnan’s hire comes at an interesting point for Andreessen Horowitz: the firm is at the center of plenty of chatter in media circles over its “go direct” content strategy. At the same time, a16z and its leadership have played an increasingly hard-nosed role in driving a broader backlash against tech media in recent years among founders and tech enthusiasts in their orbit. Krishnan has spent much of the past couple of years building out his brand of “tech optimism” content with his interview newsletter “The Observer Effect,” his Clubhouse show and his prolific Twitter usage.

Broader “tech pessimism” among media outlets has, I think, been partly a response to a swift and outspoken shift in thinking about the societal responsibility of social media platforms to more aggressively moderate the content they surface at global scale. Some of the partners at a16z, a Facebook backer, have been among the more vocal in pushing back on these critiques, even as the executives at their portfolio companies have seemed more amenable to shifting their thinking.

In his blog post, Andreessen notes that Krishnan will be joining the firm’s consumer team to invest in areas that include social.

Krishnan, well-regarded in tech circles, may play an important role at the firm as it approaches more social investments in a world where the effects of rapidly scaled consumer platforms are better understood. The firm and its partners have been throwing their full support behind Clubhouse in an aggressive push to promote the platform, flexing its celebrity connections and influence along the way as the app quickly picks up millions of new users. Krishnan’s direct operating experience with the product struggles of building platforms that scale responsibly will likely be an asset as the firm faces increased competition in an increasingly frothy venture market.

I believe I'm now supposed to say the words long expected of me.

*clears throat*

"How can I help?"

— Sriram Krishnan (@sriramk) February 3, 2021

Todd Rundgren is about to launch a geofenced virtual tour

The idea of a virtual concert tour might seem tailor-made for the pandemic, but musician Todd Rundgren said he’s actually been thinking about something like this for years.

Rundgren told me that he’d become frustrated with our “collapsing” air travel system — exacerbated by hurricanes and climate change — that increasingly left him “sitting somewhere, unable to get to my next gig.” So he was already convinced that he needed to “start imagining other ways” to reach audiences.

But it was in the context of COVID-19 that Rundgren finally decided to make it happen, with his Clearly Human Tour kicking off on February 14. He’d been planning for a traditional tour, but the dates kept getting pushed back due to the pandemic, until he finally told the organizers, “You have to let me do this. I can’t be committed to you and go two years without touring.”

Rundgren and his band will be performing entirely from Chicago, where they’ll play songs from across his career, as well as the entirety of his album “Nearly Human.” But the tour is taking place virtually across 25 U.S. cities, starting in Buffalo on February 14 and ending in Seattle on March 22.

Rundgren said he found this more appealing than the idea of performing “one show and then blast it out to everybody.”

“People plan weeks or months in advance for this particular event, it attracts people from all over the metropolitan area or a particular region,” he said. “It’s a social event as much as anything else, and that’s what we are trying to do with the localization.”

That means performing live shows at 8 p.m., according to whatever the local time zone might be. Rundgren said the band will also try to “self-hypnotize” to get into the proper spirit: “We’ll dress all the walls with posters, sports team memorabilia … We’ll get food flown in from familiar local eateries.”

Other features include virtual meet and greets with local fans, as well as placing video screens around the concert venue to display virtual audience members. (There are a limited number of in-person tickets for sale as well, but obviously those attendees will need to be in Chicago.)

The concerts will be geofenced, although Rundgren said the approach has evolved — it’s less about limiting the Buffalo concert to Buffalo attendees, and more about enforcing geographic restrictions based on his contractual obligations. Or as he put it, “It’s less about enclosing an audience, and more about fencing them out.”
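NoCap hasn’t said how its geofencing is actually implemented, so the following is purely a hypothetical sketch of the “fencing out” logic Rundgren describes: a purchase goes through unless the buyer’s geolocated region is on a show’s contractually excluded list. The Show type, the region names and the can_buy_ticket helper are invented for illustration.

```python
# Hypothetical sketch of "fencing out" rather than "fencing in": a purchase is
# blocked only if the buyer's region is contractually excluded for that date,
# not because the buyer has to be inside the show's own city.
from dataclasses import dataclass, field


@dataclass
class Show:
    city: str
    date: str
    excluded_regions: set = field(default_factory=set)  # regions fenced out by contract


def can_buy_ticket(show: Show, buyer_region: str) -> bool:
    """Allow the sale unless the buyer's geolocated region is excluded for this show."""
    return buyer_region not in show.excluded_regions


# Illustrative values only; the real restrictions would come from tour contracts.
buffalo = Show(city="Buffalo", date="2021-02-14", excluded_regions={"Seattle metro"})
print(can_buy_ticket(buffalo, "Toronto"))        # True: not fenced out
print(can_buy_ticket(buffalo, "Seattle metro"))  # False: fenced out
```

In practice the hard part is the geolocation itself (IP lookup, billing address and so on); the policy check above is the easy bit.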

Clearly Human tour poster. Image Credits: Todd Rundgren

Rundgren is staging the tour with support from NoCap, the livestreaming concert startup founded by musicians Cisco Adler and Donavon Frankenreiter. NoCap has been around for less than a year, and Adler said that while it sold 700 tickets for its first show, it’s now selling “30, 40, 50 thousand tickets” per show. And he predicted that virtual concerts won’t be going away when the pandemic ends.

“There are all these underserved markets that you can visit once every five years, if that,” he said. “The future of this becomes a hybrid model.”

After all, he noted that televising sports has only made them “bigger and more global.” Similarly, when Adler was thinking about livestreaming concerts, he said, “I didn’t look at it as: How do we build a Band Aid and help everyone through this gap? It was more: How do we build a bridge to the other side of what music can be?”

Announcing the TC Early Stage Pitch-Off

Founders — by now you must have heard about the TechCrunch Early Stage events on April 1-2 and July 8-9. The two-day founder and entrepreneur bootcamp brings together top experts to teach you how to get ahead and build a successful company. This year, on the second day of each event, we’re adding a twist — the Early Stage Pitch-Off. TechCrunch is on the hunt to showcase 10 early-stage startups to our global audience of investors, press and tech industry leaders. Apply here for the April 2 Early Stage Pitch-Off by February 21.

It wouldn’t be a TC event without highlighting the best startups in the business. Here’s how it will work: 10 founders will pitch onstage for five minutes each, followed by a five-minute Q&A with an esteemed panel of VC judges. The top three will then proceed to the finals, pitching again, this time with a more intensive Q&A and a new panel of judges. The winner will receive a feature article on TechCrunch.com, a free one-year subscription to ExtraCrunch and a free Founder Pass to TechCrunch Disrupt this fall.

Nervous about pitching onstage in front of thousands? Fear not. After completing the application, selected founders will receive several training sessions during a remote mini-bootcamp, including communication training and personalized pitch coaching from the Startup Battlefield team. Selected startups will also be announced on TechCrunch.com in advance of the show.

What does it take to qualify? TechCrunch is looking for early-stage, pre-Series A companies with limited press. The Early Stage Pitch-Off is open to companies from around the globe, consumer or enterprise, in any industry — biotech, space, mobility, impact, SaaS, hardware, sustainability and more.

Founders, don’t miss your chance to pitch your company on the world’s best tech stage. Apply today!

Deep Science: AIs with high class and higher altitudes

There’s more AI news out there than anyone can possibly keep up with. But you can stay tolerably up to date on the most interesting developments with this column, which collects AI and machine learning advancements from around the world and explains why they might be important to tech, startups or civilization.

Before we get to the research in this edition, however, here’s a study from the ITIF trade group evaluating the relative positions of the U.S., EU and China in the AI “race.” I put race in quotes because no one knows where we’re going or how long the track is — though it’s still worth checking who’s in front every once in a while.

The answer this year is the U.S., which is ahead largely due to private investment from large tech firms and venture capital. China is catching up in terms of money and published papers but still lags far behind and takes a hit for relying on U.S. silicon and infrastructure.

The EU is operating at a smaller scale and making smaller gains, especially in AI-based startup funding. Part of that is no doubt down to the inflated valuations of U.S. companies, but the trend is clear — and it may also be an opening for investors looking to get in on high-quality startups without needing quite so much capital.

The full report (PDF) goes into much more detail, of course, if you’re interested in a more granular breakdown of these numbers.

If the authors had known about this new Amazon-funded AI research center at USC, they probably would have pointed to it as a good example of the kind of partnership that helps keep U.S. production of AI scholars up.

A touch of class

At the farthest possible remove from monetization and practical application, we have two interesting uses of machine learning in fields where human expertise is valued in different ways.

Diagram showing different modes of music as groups of dots in a 3D space; each color indicates a different mode style. Image Credits: EPFL

At Switzerland’s EPFL, some music-minded boffins at the Digital and Cognitive Musicology lab were investigating how the use of modes in classical music (major, minor, other or none at all) has shifted over the ages. In an effort to objectively categorize thousands of pieces spanning hundreds of years and composers, they created an unsupervised machine learning system that listens to the pieces and sorts them by mode. (Some of the data and methods are available on GitHub.)

“We already knew that in the Renaissance, for example, there were more than two modes. But for periods following the Classical era, the distinction between the modes blurs together. We wanted to see if we could nail down these differences more concretely,” one of the researchers explained in a university news release.
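The lab’s GitHub materials describe its actual pipeline; purely to illustrate what “unsupervised” means here, the sketch below clusters a few hypothetical pitch-class histograms with k-means, so that major-like, minor-like and ambiguous profiles fall into separate groups without any labels being provided. The toy data, the use of k-means and the choice of three clusters are all assumptions for the example, not the researchers’ method.

```python
# Minimal sketch (not the EPFL lab's pipeline): unsupervised grouping of pieces
# by their pitch-class profiles, with no mode labels supplied up front.
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical input: one 12-bin histogram per piece, counting how often each
# chromatic pitch class occurs. In practice you would first transpose every
# piece to a common tonic so that clusters reflect mode rather than key.
pieces = np.array([
    [18, 1, 12, 1, 14, 10, 2, 16, 1, 12, 2, 11],  # roughly major-like profile
    [17, 2, 11, 13, 2, 11, 2, 15, 11, 2, 12, 2],  # roughly minor-like profile
    [9, 8, 8, 9, 8, 8, 9, 8, 8, 9, 8, 8],         # no clear modal center
], dtype=float)

# Normalize each histogram so pieces of different lengths are comparable.
pieces /= pieces.sum(axis=1, keepdims=True)

# The number of clusters is a modeling choice, not something the data dictates.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(pieces)
print(kmeans.labels_)  # each piece gets a cluster index, e.g. [0 2 1]
```

The interesting part is what such groupings reveal when tracked across centuries of repertoire, which is what the EPFL team was after; the clustering step itself is the unglamorous bit.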