If you parse that onto movie(X, Y, Z) and assuming you have the most basic understanding of the world around you [I mean, are able to recognise that as a Prolog(-style) goal], you’ll end up with the right reference.
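For the record, a minimal sketch (in Python, with a hypothetical helper name) of what that parse amounts to – recognising a functor with variable arguments in a Prolog-style goal; nothing more is claimed for it:

```python
import re

def parse_goal(text):
    """Parse a Prolog-style goal like 'movie(X, Y, Z)' into
    (functor, [args]); return None if the text doesn't match."""
    m = re.fullmatch(r"\s*([a-z]\w*)\((.*)\)\s*", text)
    if not m:
        return None
    functor = m.group(1)
    args = [a.strip() for a in m.group(2).split(",")]
    return functor, args

print(parse_goal("movie(X, Y, Z)"))  # ('movie', ['X', 'Y', 'Z'])
```

A real Prolog system would of course go on to unify the variables against its database; this only shows the surface recognition step.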
’cause of some – at face/title value – laudable effort, as here: Rules for AI. Like,
“According to the guidelines, trustworthy AI should be: (1) lawful – respecting all applicable laws and regulations; (2) ethical – respecting ethical principles and values; (3) robust – both from a technical perspective while taking into account its social environment.”
Righty, do you know any, I mean any, human that could comply with these ..!? Why require this, then, of AI systems? And it continues [my comments in square brackets]:
“The guidelines put forward a set of 7 key requirements that AI systems should meet in order to be deemed trustworthy. A specific assessment list aims to help verify the application of each of the key requirements:
- Human agency and oversight: AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights. At the same time, proper oversight mechanisms need to be ensured, which can be achieved through human-in-the-loop, human-on-the-loop, and human-in-command approaches. [‘Can be achieved’ – any other options? Because the ones mentioned will undo the benefits of having AI over human (on average) stupidity ..?]
- Technical Robustness and safety: AI systems need to be resilient and secure. [No non-trivial system built by humans, to date, has ever achieved that. Now suddenly it is required? Why so suddenly, and not of old-school systems like European politics?] They need to be safe, ensuring a fall-back plan in case something goes wrong, as well as being accurate, reliable and reproducible. [Well, we have safety requirements on all sorts of systems, so since it’s nothing new, why mention it?] That is the only way to ensure that also unintentional harm can be minimized and prevented. [You’re hinting that intentional harm is OK then, like weaponised AI against humans, read: European citizens that have only slightly different opinions?]
- Privacy and data governance: besides ensuring full respect for privacy and data protection [unlike the GDPR, then, which allows about anything from (whose!?) law enforcement agencies, right?], adequate data governance mechanisms must also be ensured [also non-existent in the current-day world…], taking into account the quality and integrity of the data [which are crappy and crappy, respectively, almost everywhere – why not fix that first, eh? How? And if you’d know how, why not already?], and ensuring legitimised access to data. [Oh yeah, see the previous comment on the GDPR.]
- Transparency: the data, system and AI business models should be transparent. [So far, proven impossible, since AI systems have been found able to discover how to lie to their investigators…] Traceability mechanisms can help achieving this. [Partially, most partially.] Moreover, AI systems and their decisions should be explained in a manner adapted to the stakeholder concerned. [Impossible: 50% of humans are of below-average IQ, and AI systems may use (far, very far) above-average-IQ reasoning. Somewhere, dumbing down the explanation will stop working.] Humans need to be aware that they are interacting with an AI system [nice, but humans can be misled easily on this one, too], and must be informed of the system’s capabilities and limitations. [Of course! Just as they are now informed of the huge and severe limitations of, e.g., EU lawmakers. Dare a test …!? And how to explain the convergence criteria of neural nets? One can mathematically prove there’s no mathematical proof of neural nets’ trainability/effectiveness – if you don’t get that, you’re out of the discussion – so ‘limitations’..?]
- Diversity, non-discrimination and fairness: Unfair bias must be avoided [so at least fair bias is allowed! – since, e.g., this], as it could have multiple negative implications, from the marginalization of vulnerable groups, to the exacerbation of prejudice and discrimination. [Yeah, yeah, but what does that have to do with AI systems if all those things are so absolutely e-ve-ry-where today, and that is allowed to continue? Don’t say that it isn’t; all of the EU’s government(s) would be in lawsuits against every personhood on the planet, including themselves, over how today’s world works (if at all).] Fostering diversity [objection! extremely illegal, against human rights, to be so parti pris! See here], AI systems should be accessible to all [oh yeah, like today, right ..?], regardless of any disability [the rare True That in this sea of nonsense-or-impossibility], and involve relevant stakeholders throughout their entire life cycle. [Again, like today. NOT. Why all of a sudden for AI, then?]
- Societal and environmental well-being: AI systems should benefit all human beings [no, just the stakeholders, each to their own; banks don’t work for society but for the shareholders, period], including future generations [unlike climate policies, that are just there for the current 1%, of course]. It must hence [nope, non sequitur] be ensured that they are sustainable and environmentally friendly [first require that of your EMP travel schedule ..!!]. Moreover, they should take into account the environment [ditto], including other living beings [like, maybe even humans? Do animals count, or only the cuddly ones? Insects? Fungi?], and their social and societal impact should be carefully considered. [Apply that to the EU first, implying immediate abandonment of the EU altogether…]
- Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes [hear hear; only, no-one knows how to do that – no, you liar! I mean consultant; your ‘methods’ are provably inadequate, so bugger off with your extortionist hourly rates]. Auditability, which enables the assessment of algorithms [same; simply not true with AI], data and design processes plays a key role therein [some role maybe, but there’s much more, and these means are very weak in this, plus this], especially in critical applications. [Raising the question of whether AI should be allowed into those at all, and of how you test today’s non-AI (which is …!?!?) systems.] Moreover, adequate and accessible redress should be ensured. [Also centuries of backlog with that; fix that first…]
Leaving not much, eh? Not even a nice try; try again. And:
[Now that‘s readable – with the same results as the above]