

Europe’s AI Act falls far short on protecting fundamental rights, civil society groups warn




Civil society has been poring over the detail of the European Commission’s proposal for a risk-based framework for regulating applications of artificial intelligence, which the EU’s executive put forward back in April.

The verdict of over a hundred civil society organizations is that the draft legislation falls far short of protecting fundamental rights from AI-fuelled harms like scaled discrimination and black-box bias — and they’ve published a call for major revisions.

“We specifically recognise that AI systems exacerbate structural imbalances of power, with harms often falling on the most marginalised in society. As such, this collective statement sets out the call of 115 civil society organisations towards an Artificial Intelligence Act that foregrounds fundamental rights,” they write, going on to identify nine “goals” (each with a variety of suggested revisions) in the full statement of recommendations.

The Commission, which drafted the legislation, billed the AI regulation as a framework for “trustworthy”, “human-centric” artificial intelligence. However, it risks veering rather closer to an enabling framework for data-driven abuse, per the civil society groups’ analysis — given the lack of essential checks and balances to actually prevent automated harms.

Today’s statement was drafted by European Digital Rights (EDRi), Access Now, Panoptykon Foundation, AlgorithmWatch, European Disability Forum (EDF), Bits of Freedom, Fair Trials, PICUM, and ANEC — and has been signed by a full 115 not-for-profits from across Europe and beyond.

The advocacy groups are hoping their recommendations will be picked up by the European Parliament and Council as the co-legislators continue debating — and amending — the Artificial Intelligence Act (AIA) proposal ahead of any final text being adopted and applied across the EU.

Key suggestions from the civil society organizations include the need for the regulation to be amended to have a flexible, future-proofed approach to assessing AI-fuelled risks — meaning it would allow for updates to the list of use-cases that are considered unacceptable (and therefore prohibited) and those that the regulation merely limits, as well as the ability to expand the (currently fixed) list of so-called “high risk” uses.

The Commission’s approach to categorizing AI risks is too “rigid” and poorly designed (the groups’ statement literally calls it “dysfunctional”) to keep pace with fast-developing, iterating AI technologies and changing use cases for data-driven technologies, in the NGOs’ view.

“This approach of ex ante designating AI systems to different risk categories does not consider that the level of risk also depends on the context in which a system is deployed and cannot be fully determined in advance,” they write. “Further, whilst the AIA includes a mechanism by which the list of ‘high-risk’ AI systems can be updated, it provides no scope for updating ‘unacceptable’ (Art. 5) and limited risk (Art. 52) lists.

“In addition, although Annex III can be updated to add new systems to the list of high-risk AI systems, systems can only be added within the scope of the existing eight area headings. Those headings cannot currently be modified within the framework of the AIA. These rigid aspects of the framework undermine the lasting relevance of the AIA, and in particular its capacity to respond to future developments and emerging risks for fundamental rights.”

They have also called out the Commission for a lack of ambition in framing prohibited use-cases of AI — urging a “full ban” on all social scoring systems; on all remote biometric identification in publicly accessible spaces (not just narrow limits on how law enforcement can use the tech); on all emotion recognition systems; on all discriminatory biometric categorisation; on all AI physiognomy; on all systems used to predict future criminal activity; and on all systems to profile and risk-assess in a migration context — arguing for prohibitions “on all AI systems posing an unacceptable risk to fundamental rights”.

On this the groups’ recommendations echo earlier calls for the regulation to go further and fully prohibit remote biometric surveillance — including from the EU’s data protection supervisor.

The civil society groups also want regulatory obligations to apply to users of high risk AI systems, not just providers (developers) — calling for a mandatory obligation on users to conduct and publish a fundamental rights impact assessment to ensure accountability around risks cannot be circumvented by the regulation’s predominant focus on providers.

After all, an AI technology that’s developed for one ostensible purpose could be applied for a different use-case that raises distinct rights risks.

Hence they want explicit obligations on users of “high risk” AIs to publish impact assessments — which they say should cover potential impacts on people, fundamental rights, the environment and the broader public interest.

“While some of the risk posed by the systems listed in Annex III comes from how they are designed, significant risks stem from how they are used. This means that providers cannot comprehensively assess the full potential impact of a high-risk AI system during the conformity assessment, and therefore that users must have obligations to uphold fundamental rights as well,” they urge.

They also argue for transparency requirements to be extended to users of high-risk systems — suggesting they should have to register the specific use of an AI system in a public database the regulation proposes to establish for providers of such systems.

“The EU database for stand-alone high-risk AI systems (Art. 60) provides a promising opportunity for increasing the transparency of AI systems vis-à-vis impacted individuals and civil society, and could greatly facilitate public interest research. However, the database currently only contains information on high-risk systems registered by providers, without information on the context of use,” they write, warning: “This loophole undermines the purpose of the database, as it will prevent the public from finding out where, by whom and for what purpose(s) high-risk AI systems are actually used.”

Another recommendation addresses a key civil society criticism of the proposed framework — that it does not offer individuals rights and avenues for redress when they are negatively impacted by AI.

This marks a striking departure from existing EU data protection law — which confers a suite of rights on people attached to their personal data and — at least on paper — allows them to seek redress for breaches, as well as for third parties to seek redress on individuals’ behalf. (Moreover, the General Data Protection Regulation includes provisions related to automated processing of personal data; with Article 22 giving people subject to decisions with a legal or similar effect which are based solely on automation a right to information about the processing; and/or to request a human review or challenge the decision.)

The lack of “meaningful rights and redress” for people impacted by AI systems represents a gaping hole in the framework’s ability to guard against high risk automation scaling harms, the groups argue.

“The AIA currently does not confer individual rights to people impacted by AI systems, nor does it contain any provision for individual or collective redress, or a mechanism by which people or civil society can participate in the investigatory process of high-risk AI systems. As such, the AIA does not fully address the myriad harms that arise from the opacity, complexity, scale and power imbalance in which AI systems are deployed,” they warn.

They are recommending the legislation be amended to include two individual rights as a basis for judicial remedies — namely:

  • (a) The right not to be subject to AI systems that pose an unacceptable risk or do not comply with the Act; and
  • (b) The right to be provided with a clear and intelligible explanation, in a manner that is accessible for persons with disabilities, for decisions taken with the assistance of systems within the scope of the AIA;

They also suggest a right to an “effective remedy” for those whose rights are infringed “as a result of the putting into service of an AI system”. And, as you might expect, the civil society organizations want a mechanism for public interest groups such as themselves to be able to lodge a complaint with national supervisory authorities for a breach or in relation to AI systems that undermine fundamental rights or the public interest — which they specify should trigger an investigation. (GDPR complaints simply being ignored by oversight bodies is a major problem with effective enforcement of that regime.)

Other recommendations in the groups’ statement include the need for accessibility to be considered throughout the AI system’s lifecycle, and they call out the lack of accessibility requirements in the regulation — warning that this risks leading to the development and use of AI with “further barriers for persons with disabilities”; they also want explicit limits to ensure that the harmonized product safety standards which the regulation proposes to delegate to private standards bodies only cover “genuinely technical” aspects of high-risk AI systems (so that political and fundamental rights decisions “remain firmly within the democratic scrutiny of EU legislators”, as they put it); and they want requirements on AI system users and providers to apply not only when the outputs are applied within the EU but also elsewhere — “to avoid risk of discrimination, surveillance, and abuse through technologies developed in the EU”.

Sustainability and environmental protection has also been overlooked, per the groups’ assessment.

On that they’re calling for “horizontal, public-facing transparency requirements on the resource consumption and greenhouse gas emission impacts of AI systems” — regardless of risk level; and covering AI system design, data management and training, application, and underlying infrastructures (hardware, data centres, etc.).

The European Commission frequently justifies its aim of encouraging the uptake of AI by touting automation as a key technology for enabling the bloc’s sought-after transition to a “climate-neutral” continent by 2050 — however AI’s own energy and resource consumption is a much overlooked component of these so-called ‘smart’ systems. Without robust environmental auditing requirements also applying to AI, it’s simply PR to claim that AI will provide the answer to climate change.

The Commission has been contacted for a response to the civil society recommendations.

Last month, MEPs in the European Parliament voted to back a total ban on remote biometric surveillance technologies such as facial recognition, a ban on the use of private facial recognition databases and a ban on predictive policing based on behavioural data.

They also voted for a ban on social scoring systems which seek to rate the trustworthiness of citizens based on their behaviour or personality, and for a ban on AI assisting judicial decisions — another highly controversial area where automation is already being applied.

So MEPs are likely to take careful note of the civil society recommendations as they work on amendments to the AI Act.

In parallel the Council is in the process of determining its negotiating mandate on the regulation — and current proposals are pushing for a ban on social scoring by private companies but seeking carve outs for R&D and national security uses of AI.

Discussions between the Commission, Parliament and Council will determine the final shape of the regulation, although the parliament must also approve the final text of the regulation in a plenary vote — so MEPs’ views will play a key role.



Paack pulls in a $225M Series D led by SoftBank to scale its E-commerce delivery platform



By now, many of us are familiar with the warehouse robots which populate those vast spaces occupied by the likes of Amazon and others. In particular, Amazon was very much a pioneer of the technology. But it’s 2021 now, and allying warehouse robots with a software logistics platform is no longer the monopoly of one company.

One late-stage startup which has been ‘making hay’ with the whole idea is Paack, an e-commerce delivery company with a sophisticated software platform that integrates with the robotics essential to modern-day logistics operations.

It’s now raised €200m ($225m) in a Series D funding round led by SoftBank Vision Fund 2. The capital will be used for product development and European expansion.

New participants for this round also include Infravia Capital Partners, First Bridge Ventures, and Endeavor Catalyst. Returning investors include Unbound, Kibo Ventures, Big Sur Ventures, RPS Ventures, Fuse Partners, Rider Global, Castel Capital, and Iñaki Berenguer.

This funding round comes after Paack reached a profitable position in its home market of Spain, and the company claims it’s on track to achieve similar results across its European operations, such as in the UK, France, and Portugal.

Founded by Fernando Benito, Xavier Rosales and Suraj Shirvankar, Paack now says it’s delivering several million orders per month from 150 international clients, processing 10,000 parcels per hour, per site. Some 17 of them are amongst the largest e-commerce retailers in Spain.

The startup’s systems integrate with e-commerce sites. This means consumers are able to customize their delivery schedule at checkout, says the company.

Benito, CEO and Co-founder, said: “Demand for convenient, timely, and more sustainable methods of delivery is going to explode over the next few years and Paack is providing the solution. We use technology to provide consumers with control and choice over their deliveries, and reduce the carbon footprint of our distribution.” 

Max Ohrstrand, Investment Director at SoftBank Investment Advisers said: “As the e-commerce sector continues to flourish and same-day delivery is increasingly the norm for consumers, we believe Paack is well-positioned to become the category leader both in terms of its technology and commitment to sustainability.”

According to research from the World Economic Forum (WEF), the last-mile delivery business is expected to grow 78% by 2030, causing a rise in CO2 emissions of nearly one-third.

As a result, Paack claims it aims to deliver all parcels at carbon net-zero by measuring its environmental impact and using electric last-mile delivery vehicles. It is now seeking certification with The Carbon Trust and United Nations.

In an interview Benito told me: “We have a very clear short term vision which is to lead sustainable e-commerce deliveries in Europe… through technology via what we think is perhaps the most advanced tech delivery platform for last-mile delivery. Our CTO was the CTO and co-founder of Google Cloud, for instance.”

“We are developing everything from warehouse automation, time windows, routing integrations etc. in order to achieve the best delivery experience.”

Paack says it is able to work with more than one robotics partner, but presently it is using robots from Chinese firm GEEK.

The company hopes it can compete with large European incumbents such as DHL, Instabox, and La Poste.



Infermedica raises $30M to expand its AI-based medical guidance platform



Infermedica, a Poland-founded digital health company that offers AI-powered solutions for symptom analysis and patient triage, has raised $30 million in Series B funding. The round was led by One Peak and included participation from previous investors Karma Ventures, European Bank for Reconstruction and Development, Heal Capital and Inovo Venture Partners. The new capital means the startup has raised $45 million in total to date.

Founded in 2012, Infermedica aims to make it easier for doctors to pre-diagnose, triage and direct their patients to appropriate medical services. The company’s mission is to make primary care more accessible and affordable by introducing automation into healthcare. Infermedica has created a B2B platform for health systems, payers and providers that automates patient triage, the intake process and follow-up after a visit. Since its launch, Infermedica has been used in more than 30 countries in 19 languages and has completed more than 10 million health checks.

The company offers a preliminary diagnosis symptom checker, an AI-driven software that supports call operators making timely triage recommendations and an application programming interface that allows users to build customized diagnostic solutions from scratch. Like a plethora of competitors, such as Ada Health and Babylon, Infermedica combines the expertise of physicians with its own algorithms to offer symptom triage and patient advice.

In terms of the new funding, Infermedica CEO Piotr Orzechowski told TechCrunch in an email that the investment will be used to further develop the company’s Medical Guidance Platform and add new modules to cover the full primary care journey. Last year, Infermedica’s team grew by 80% to 180 specialists, including physicians, data scientists and engineers. Orzechowski says Infermedica has an ambitious plan to nearly double its team in the next 12 months.

Image Credits: Infermedica

“We will invest heavily into our people and our products, rolling out new modules of our platform as well as expanding our underlying AI capabilities in terms of disease coverage and accuracy,” Orzechowski said. “From the commercial perspective, our goal is to strengthen our position in the US and DACH and we will focus the majority of our sales and marketing efforts there.”

Regarding the future, Orzechowski said he’s a firm believer that there will be fully automated self-care bots in 5-10 years that will be available 24/7 to help providers find solutions to low acuity health concerns, such as a cold or UTI.

“According to WHO, by 2030 we might see a shortage of almost 10 million doctors, nurses and midwives globally,” Orzechowski said. “Having certain constraints on how fast we can train healthcare professionals, our long-term plan assumes that AI will become a core element of every modern healthcare system by navigating patients and automating mundane tasks, saving the precious time of clinical staff and supporting them with clinically accurate technology.”

Infermedica’s Series B round follows its $10 million Series A investment announced in August 2020. The round was led by the European Bank for Reconstruction and Development (EBRD) and digital health fund Heal Capital. Existing investors Karma Ventures, Inovo Venture Partners and Dreamit Ventures also participated in the round.



KKR invests $45M into GrowSari, a B2B platform for Filipino MSMEs



A sari-sari store owner who uses GrowSari

GrowSari, the Manila-based startup that helps small shops grow and digitize, announced today that KKR will lead its Series C round with a $45 million investment. The funds will be used to enter new regions in the Philippines and expand its financial products. The Series C round is still ongoing and the startup says it is already oversubscribed, with the final composition currently being finalized. 

Before its Series C, GrowSari’s total raised was $30 million. TechCrunch last wrote about GrowSari in June 2021, when it announced its Series B. Since then, it has expanded the number of municipalities it serves from 100 to 220, and now has a customer base of 100,000 micro, small and mid-sized enterprise (MSME) store owners. 

Founded in 2016, GrowSari is a B2B platform that offers almost every kind of service that small- to medium-sized retailers, including neighborhood stores that carry daily necessities (called sari-saris), roadside and market shops and pharmacies, need.

For example, it has a wholesale marketplace with products from major fast-moving consumer goods (FMCG) brands like Unilever, P&G and Nestle. It partners with over 200 providers, like telecoms, fintechs and subscription plans, so sari-saris can offer services like top-ups and bill payments to their customers. 

Sari-sari operators can also use GrowSari to launch e-commerce stores and access short-term working capital loans to buy inventory. The startup’s other financial products include digital wallets and cash-in services, and it is looking at adding remittance, insurance and loans in partnership with other providers. 

The new funding will be used to expand into the Visayas and Mindanao, the two other main geographical regions in the Philippines, with the goal of covering all 1.1 million “mom and pop” stores in the Philippines. 
