Faculty Spotlight
Jason Schultz
Professor of Clinical Law
Intellectual property and consumer protection issues in artificial intelligence.
In my Technology Law and Policy Clinic course, my students and I work pro bono on cutting-edge public interest technology projects for nonprofit, government, and individual clients who could not otherwise afford legal representation. We work across a range of issues: privacy, security, consumer protection, intellectual property, and civil liberties. It’s wonderful in so many ways, especially engaging with students to tackle new areas of law and technology.
In conjunction with the Engelberg Center on Innovation Law & Policy and the Knowing Machines research project, I work on a number of law and policy efforts to ensure open access and consumer protection around machine learning and artificial intelligence. These range from filing amicus (“friend of the court”) briefs in key legal cases to submitting comments to local, state, federal, and international regulators on topics such as worker surveillance, the use of copyrighted works to train AI, and the unlawful appropriation of facial images to build facial recognition software. My work relates to the Alliance for Public Interest Technology in that the Alliance is trying to ensure that public interest values are part of how NYU approaches building and enhancing new technologies, including AI.
Alliance for Public Interest Technology launch event, February 6, 2020
Pamela Samuelson at UC Berkeley has been an amazing leader in the field of public interest technology law, often seeing issues years ahead of others, and laying much of the theoretical groundwork for how to intervene in the courts or Congress. I model much of my teaching on her approach.
Alan Turing, to see what he would think of where AI is today. I’d be curious whether he would think the current generation of AI tools comes close to passing his test—whether a computer is capable of “thinking” like a human being—especially since he noted that to pass the test, a machine intelligence would not always give correct answers, because humans often do not know or give correct answers. Deception was part of imitating humans, so when we see AI systems “hallucinate,” one could surmise they are merely imitating human hallucinations about our own realities.
Probably at a startup working on responsible AI efforts.