AI Risk and the Law of AGI

Posted by NeolibsLoveBeans

2 Comments

  1. NeolibsLoveBeans on

    This is the funniest and most interesting idea I’ve read in a long time.

    > We argue that a surprising set of legal interventions could go a long way toward preventing powerful AGI systems from causing large-scale harm: Law could grant such systems the legal rights to make contracts, hold property, and bring certain basic tort claims. This may sound like a radical proposal. But it is in some sense quite familiar. Law already extends such rights to other kinds of powerful, agentic, and nonhuman entities—like corporations and nation-states. Granting these basic private law rights could reduce AGI risk for the same reason it reduces risk in domestic economies and international relations. Namely, these rights—and contract rights especially—offer a means by which entities with conflicting goals can pursue their divergent ends without incurring the high costs of violent conflict.

    > What kinds of credible agreements between humans and AIs could AI contract rights enable, then? The same ones they enable between humans and other humans: ordinary bargains to exchange goods and services. Humans might, for example, promise to give AIs some amount of computing power with which AIs could pursue their own goals. AIs, in turn, might agree to give humans the cure to some deadly cancer. And so on.

  2. AnachronisticPenguin on

    Discussion of AGI, and of AI with independent self-interest, is very silly at the moment.

    From the trends we have seen, it seems likely at this point that we will get superintelligence without sentient AGI.
