Owen Lavine – 8/13/23
Nestled on the southern peninsula of Monterey Bay, only a couple hours north of San Luis Obispo, are the Asilomar conference grounds. Asilomar has hosted a variety of prestigious technologists, scientists and innovators; it serves as an Independence Hall for scientists, who gather there to draft their own constitutions of scientific ethics.
In 2017, Asilomar hosted a group of computer scientists, programmers and AI developers for the 'Beneficial AI' conference put on by the Future of Life Institute (FLI), where 23 basic AI governance principles were agreed upon. Now, FLI and others are urging a six-month pause on the development of the most powerful AI systems, arguing that the Asilomar principles are not being upheld, according to an open letter released by FLI on March 22.
Disagreements among those within the industry and outside about the ethics of AI are widespread – the same is true at Cal Poly.
Computer science senior Peter Marsh is working collaboratively with researchers at CSU Long Beach to create AI machine-learning algorithms that identify the location of sharks in images taken by underwater cameras.
Marsh believes his project is an example of how AI can be utilized for good, yet he still contends that AI could be used to the detriment of society.
“It’s a black box,” Marsh said. “We have created algorithms so complex that we can’t even begin to understand them.”
Marsh took philosophy professor Jacob Sparks' Philosophy of Ethics, Science and Technology (PHIL 323) in fall 2022. The class is required for most STEM majors, drawing students from across the STEM spectrum, so weekly discussions revolve around topics that cross disciplines.
Marsh remembers the week AI was first discussed in PHIL 323, with the resounding conclusion from him and his classmates being to “stop developing this stuff.”
Marsh believes there is too much latency built into the government and that the U.S. isn't well equipped to deal with the "quantum shifts" in AI that happen every few months, so in his view it is easier to ban development outright.
Recent developments in AI technology, such as OpenAI's chatbot ChatGPT and the widespread use of AI-assisted tools, are among the many advancements drawing special attention.
Alternatively, incoming department chair of Applied Computing Christopher Lupo contends that the government could put regulations in place rather than ban AI development outright, a move he considers potentially dangerous.
Lupo believes 'bad actors' would continue to develop AI even if 'good actors' paused their work, and that those bad actors could then use their more advanced AI to attack the good actors.
“We don’t want to come behind and end up trying to scrape out of some potential cyber security attack,” Lupo said.
Lupo added that licensure laws could potentially help control the negative impacts of AI: in the same way a doctor needs a medical license, computer scientists would need an AI license. He further suggested that tests could be created for AI products to gauge how they respond to election misinformation and disinformation, and to track different AIs' responses when prompted to generate racist, sexist and other forms of bigoted content.
IBM has recently developed multiple products to test how AI chatbots respond to bigoted content and to probe concerning ways AI can be exploited. One such test measures how AI chatbots filter counterfactual information, according to IBM.
Marsh said that in his experience most computer science students come into the class with an uncritical view of AI development.
“The vast majority is very pro-tech and pro-development of technology and it takes a little while of conversation before these ideas start coming out like ‘Holy shit, what if continuing to develop social media means we never see another human face-to-face again’,” Marsh said.
Sparks said his class is not structured to push students toward any particular conclusions, but rather to get them thinking critically about ethical conundrums. PHIL 323 was not a requirement for computer science students until the curriculum flowchart was updated to include it in 2020.
“Instead of the dialogue always being ‘Are you for it or against it?’, it should be ‘What are you trying to build and why?’,” Sparks added.
Sparks recently attended the International Conference on Computer Ethics where philosophers, programmers and others meet to discuss all things computer ethics-related. He said, ironically, a major takeaway from the conference was that discussions about AI ethics often lead to benefits for the major power players who control AI development.
“The obvious result of that letter [the FLI letter mentioned previously] being published is people pour more money into their investments in OpenAI and Microsoft and all the other companies,” Sparks said. “So raising the ethical concern often has the opposite effect.”