Q&A: UofL AI safety expert says artificial superintelligence could harm humanity
Mon, 15 Jul 2024

Roman Yampolskiy knows a thing or two about artificial intelligence (AI). A University of Louisville associate professor of computer science, he conducts research into futuristic AI systems, superintelligent systems and general AI. Yampolskiy coined the term “AI safety” in a 2011 publication and was one of the first computer scientists to formally research the field, which focuses on preventing harmful actions by AI systems. He is listed among the top 2% of cited researchers in the world.

Technology companies are racing to develop artificial general intelligence – systems that can learn, respond and apply knowledge at levels comparable to humans in most domains – or even superintelligence, systems that far exceed human capabilities across a wide range of tasks. They hope these systems will solve human health problems, resolve enduring social issues and relieve human workers of mundane tasks. Surveyed AI experts estimate that artificial general intelligence (AGI) or superintelligence is likely to become reality within .

Yampolskiy has concerns about this powerful technology, however. His research indicates that these systems cannot be controlled, leaving a high probability that a superintelligent AI system could do immense harm to its human creators, whether of its own volition, through a coding mistake or under malicious direction. It might develop a pathogen that could wipe out the human population or launch a nuclear war, for example. Without a mechanism to control these systems, Yampolskiy believes AI poses a grave risk to the human race. For this reason, he strongly advocates that development of the technology be slowed or suspended until AI safety can be assured and controls established.

Yampolskiy recently published a book, “,” in which he explains why he believes it is unlikely we will be able to control such systems.

UofL News sat down with Yampolskiy to learn more about his concerns and what might prevent an AI catastrophe.

UofL News: What led you to research AI safety?

Roman Yampolskiy: My PhD [2008] was on security for online poker. At the time, bots were a common nuisance for online casinos, so I developed some algorithms to detect bots to prevent them from participating. But then I realized, they are only going to get better. They are going to get much more capable, and this area will be very important once we start seeing real progress in AI.

We were looking at things 12 or 13 years ago that people are just now proposing. It was science fiction at the time. There was no funding, no journals and no conferences on this stuff.

UofL News: What are your concerns with the development of advanced AI?

Roman Yampolskiy: Historically, AI was a tool, like any other technology. Whether it was good or bad was up to the user of that tool. You can use a hammer to build a house or to kill someone. The hammer is not in any way making decisions about it.

With advanced AI, we are switching the paradigm from tools to agents. The software becomes capable of making its own decisions, working independently, learning, self-improving, modifying. How do we stay in control? How do we make sure the tool doesn’t become an agent that does something we don’t agree with or don’t support? Maybe something against us. Maybe something we cannot undo because it is so impactful in the world, controlling nuclear plants, space flight or military applications. Once you deploy those systems, there is no undoing that. How will we guarantee that no matter how capable those systems become, how independent, we still have a say in what happens to us and for us?

I don’t think it’s possible to indefinitely control superintelligence. By definition, it’s smarter than you. It learns faster, it acts faster, it will change faster. You will have malevolent actors modifying it. We have no precedent of lower capability agents indefinitely staying in charge of more capable agents.

Until some company or scientist says ‘Here’s the proof! We can definitely have a safety mechanism that can scale to any level of intelligence,’ I don’t think we should be developing those general superintelligences.

We can get most of the benefits we want from narrow AI, systems designed for specific tasks: develop a drug, drive a car. They don’t have to be smarter than the smartest of us combined.

Roman Yampolskiy, associate professor of computer science, is calling for a pause in the development of artificial superintelligence until we know the systems can be controlled. Photo by Ashly Cecil.

UofL News: What harmful outcomes could result from artificial general superintelligence?

Yampolskiy: There are three different types of risks. One type is existential risk, where everyone dies.

Somewhat worse is suffering risks where everyone wishes they were dead.

Somewhat “nicer” is the risk of meaninglessness – where you have no meaning. You have nothing to contribute to superintelligence: you are not a better mathematician, not a better philosopher, not a better poet. Your life is kind of pointless. For many people, their creative output is the meaning they derive in this world. So, we will have a strong paradigm shift in terms of leisure time and society as a whole. That is the best outcome of the three.

UofL News: What is the best-case scenario?

Yampolskiy: I’m wrong! I’m completely wrong. It’s actually possible to control it, we figure it out in time and we have this utopia-like future where the biggest problem is figuring out what to do with all our wealth and health and spare time.

UofL News: Do a lot of other AI experts agree that action is needed?

Yampolskiy: We had open letters signed by thousands of scientists saying we think this is as dangerous as nuclear weapons and we need government regulation. And not just quantity but quality – top scientists, Nobel prize winners – all coming on board, agreeing with our message and signing a statement that this should be a global priority.

UofL News: What should we do to prevent these negative outcomes?

Yampolskiy: There is a lot of research on some aspects of impossibility results – showing that it is impossible to explain, predict and control the systems. There is a lot of research in trying to understand how large neural networks function. I support it fully; there should be more of that.

As individuals, you can vote for politicians who are knowledgeable about such things. We can have more scientists and engineers in office.

If you don’t engage with this technology, you don’t provide free training and labeling data for it. If you don’t pay for subscription services, you don’t give them money to buy more compute, and they are less likely to be able to raise funding as quickly. You are buying us time. If you insist on pointless government red tape and regulation, it slows them down; it diverts money from their computing budget into the legal budget. Now they have to deal with meaningless government regulation, which is usually undesirable, but here I strongly encourage it.

UofL News: What do you think about instructors and students using ChatGPT, Bing or other generative AI technologies?

Yampolskiy: Previous answer notwithstanding, if you don’t embrace the use of existing generative AI, you are going to be obsolete. You are competing with people who do have the knowledge and ability to use those tools, and you will not be competitive, so you really have no choice. You can be Amish-like, but that’s not what college is all about.

Roman Yampolskiy, associate professor of computer science. Photo by Ashly Cecil.

UofL News: What are you working on now to improve AI safety and make it possible to control these systems?

Yampolskiy: Continuing with impossibility results. There are many tools we would need to try to control the systems, so we hope to understand which of those tools are accessible to us. I am trying to understand, even in theory, to what degree each tool is accessible. We worry about testing, for example. Can you successfully test general intelligence?

UofL News: What else should people know about the issue of uncontrollable artificial intelligence?

Yampolskiy: We haven’t lost until we have lost. We still have a great chance to do it right, and we can have a great future. We can use narrow AI tools to cure aging, an important problem, and I think we are close on that front. Free labor, physical and cognitive, will give us a lot of economic wealth to do better in the many areas of society where we are struggling today.

People should try to understand the unpredictable consequences and existential risks of bringing AGI or superintelligent AI into the real world. Eight billion people are part of this experiment they never consented to – not just that they have not consented, they cannot give meaningful consent because nobody understands what they are consenting to. It’s not explainable, it’s not predictable, so by definition, it’s an unethical experiment on all of us.

So, we should put some pressure on people who are irresponsibly moving too quickly on AI capabilities development to slow down, to stop, to look in the other direction, to allow us to only develop AI systems we will not regret creating.


pAInt: UofL professor explores blurred lines between art and technology
Fri, 21 Jun 2024

They say seeing is believing. But when most of what we see is filtered through screens and algorithms, it’s hard to be sure. Is that selfie touched up? And was that viral video real or made with artificial intelligence?

The impact of technology on how we experience the world creates both new possibilities and a host of practical and ethical questions. But Tiffany Calvert, an associate professor in UofL’s Hite Institute of Art + Design, is looking for answers — and to find them, she’s going straight to the source.

In her “Machine Vision Series,” Calvert partners with her own virtual apprentice, a bot trained to paint as her collaborator. Calvert believes working with AI can help us understand its implications and explore the blurring line between what we see and what’s real.

“I often get asked, ‘is AI your collaborator or your antagonist?’ ” said Calvert, one of many at UofL exploring the world through creativity. “The answer is that it’s complicated. I’m working with AI in a way that both criticizes its vulnerabilities and has a healthy appreciation of what it can do.”

ART DOTCOM

Cutting her artistic teeth at the height of the ‘90s Dotcom bubble, Calvert has long been fascinated with the intersection of art and technology. Then, traditional forms of visual expression were converging with new digital tools for photo-editing and design.

Calvert cakes on thick layers of paint to differentiate herself from her bot collaborator.

“There was something exciting about that convergence and the fact that I could use these tools to build something creative,” she said. In a way, Calvert saw technology as a medium similar to charcoals or paint. But as technology has advanced, now capable of its own analysis and decision-making, it’s become more of an artistic partner.

For her “Machine Vision Series,” Calvert trained her AI collaborator by feeding it more than 1,000 historical still life paintings of tulips in bloom. It’s a technique known as machine learning, where a computer is shown examples to learn what something looks like — be it cars, crosswalks or frescos.

After a while, the AI could recognize the tulips and begin to ‘paint’ its own. Calvert would paint, then the computer, then Calvert again, caking on thick, colorful globs of oil pigment to differentiate herself from the machine.
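The machine-learning loop described above, showing a computer labeled examples until it can recognize the category on its own, can be illustrated with a deliberately tiny sketch. This is a hypothetical nearest-centroid classifier on made-up two-number “feature vectors,” not the model behind Calvert’s paintings, which trained on actual images:

```python
from statistics import mean

def train(examples):
    """examples: dict mapping a label to a list of feature vectors.
    "Learning" here is just averaging the examples of each label."""
    return {label: [mean(dim) for dim in zip(*vectors)]
            for label, vectors in examples.items()}

def classify(model, vector):
    """Predict the label whose learned average is closest."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(vector, centroid))
    return min(model, key=lambda label: dist(model[label]))

# Invented "tulip vs. not-tulip" training data: each vector stands in
# for features a real system would extract from an image.
training = {
    "tulip":     [[0.9, 0.1], [0.8, 0.2], [1.0, 0.0]],
    "not_tulip": [[0.1, 0.9], [0.2, 0.8], [0.0, 1.0]],
}
model = train(training)
print(classify(model, [0.85, 0.15]))  # prints "tulip"
```

A real system learns millions of parameters from images rather than two averages, which is also why it can misgeneralize in the surprising ways Calvert exploits.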

The partnership might seem counterintuitive. Art, after all, is built on humanity and meaningful imperfection, but you’d expect a computer algorithm — something literally built on logic — to produce only the predictable and perfect.

But when the AI painted, it wasn’t perfect. The algorithm can only interpret based on what it’s seen before, and sometimes it misinterpreted or made logical leaps. Some AI-generated tulips were distorted in interesting and unpredictable ways, like confusing the bulb of a flower with, say, an oyster or halved peach.

“Those distortions behave like a mutating virus,” Calvert said. “It’s interesting, because while it’s incredible that the technology can generate beautiful imagery, those misinterpretations reveal the underlying humanity in the code, and the biases inherent in datasets.”

THE HUMANITY

While flowers that look like peaches might seem like a problem, for Calvert, it’s a good thing. Artists are much more interested in problems than answers.

“That’s where the interesting stuff happens,” she said. “These problems allow me to explore larger issues. How is this a metaphor for technology infecting our world and what precedents are out there?”

AI can be a powerful tool, she said, but it’s only as good as its human creators and users — who aren’t always clear, make mistakes and sometimes behave irresponsibly, irrationally or maliciously.

Tiffany Calvert paints tulip blossoms in her Louisville studio.

“The technology is obviously only as good as the information we give it, how we program it and how we use it,” she said. “That’s the underlying paradox, the humanity in the machine.”

Take the technology that created the tulips in Calvert’s paintings. Those specific tulips are the result of a plant virus that spread during the 17th-century Dutch Golden Age, creating an explosion of new and unique tulip colors and variants.

That virus underpinned Tulip Mania, the first speculative bubble of the modern era, where the flowers were as much an investment and status symbol as decoration. Dutch consumers might have purchased a tulip bulb for more than the average salary.

“When Tulip Mania happened, the technology got way out of control from both an economic perspective and a biological one, where it’s now a problem for farmers,” Calvert said. “So humans, in their hubris, didn’t understand the destruction they’d created.”

That’s why, Calvert said, it’s important to take a critical eye to technology and understand its implications. For example, with AI technology readily available and the content it creates surging across the internet, a recent Forbes survey shows some 75% of consumers worry AI will be used for misinformation.

“It’s interesting to explore, because AI is both really critical to solving important problems and, at the same time, it depends on who programs and uses it,” she said. “Painting has always adopted and responded to new technologies as a way of examining our perception of the world.”

UofL researchers develop AI-powered tool to diagnose autism earlier
Mon, 19 Feb 2024

University of Louisville researchers have developed a new AI-powered tool that could help doctors diagnose autism at a younger age.

Autism is a spectrum of developmental disabilities impacting social skills, language processing, cognition and other functions. The UofL tool has been shown to be 98.5% accurate in diagnosing kids as young as two, which could give doctors more time to intervene with potentially life-changing therapy. Their results were published in the journal .

“Therapy could be the difference between an individual needing full-time care and being independent, holding a job and living a fulfilled life,” said Ayman El-Baz, a co-inventor and professor and chair in the . He developed the technology with Gregory Barnes and Manuel Casanova of the UofL .

Research shows therapy can have the most impact if done in early childhood, when the brain is more elastic. However, many children currently are not diagnosed that early, and some remain undiagnosed even by age eight. The problem, the researchers say, is one of supply and demand — there are too many patients and too few specialists to conduct the interviews and examinations needed for diagnosis.

“As a result, there’s an urgent need for a new, objective technology that can help us diagnose kids early,” said Barnes, a professor of neurology and executive director of the . “We think our tool can help fill that need, while providing more objectivity over the current interview method.”

With the UofL technology, AI can make the initial diagnosis, which researchers think could reduce specialist workload by as much as 30%. The specialist would meet later with the patient to confirm the diagnosis and talk about next steps.

The UofL technology works by using AI to analyze magnetic resonance imaging (MRI) scans for differences and abnormal connections that may indicate autism. Tested against scans of 226 children between the ages of 24 and 48 months, the technology was able to identify the 120-some children with autism with near perfect accuracy.
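As context for how a headline figure like “98.5% accurate” is derived, the standard calculation from a confusion matrix is simple. The counts below are hypothetical, chosen only to sum to the article’s 226-scan cohort with about 120 children with autism; they are not the study’s actual numbers:

```python
def diagnostic_metrics(tp, tn, fp, fn):
    """Standard screening metrics from confusion-matrix counts:
    tp/fn = children with autism flagged/missed by the tool,
    tn/fp = children without autism cleared/wrongly flagged."""
    total = tp + tn + fp + fn
    return {
        "accuracy":    (tp + tn) / total,  # share of all calls correct
        "sensitivity": tp / (tp + fn),     # share of true cases caught
        "specificity": tn / (tn + fp),     # share of non-cases cleared
    }

# Hypothetical split of 226 scans, 120 of them children with autism.
m = diagnostic_metrics(tp=118, tn=105, fp=1, fn=2)
print(f"{m['accuracy']:.1%}")  # prints "98.7%"
```

Accuracy alone can mislead when the two groups are different sizes, which is why sensitivity and specificity are usually reported alongside it.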

By looking at the physical structures of the brain rather than using interviews, researchers believe they can make diagnoses more objective and target the specific parts of the brain that may benefit most from therapy.

“The idea is that by drawing from both medicine and engineering, we can come up with a better solution that improves lives,” said Mohamed Khudri, an undergraduate student and an author on the paper.

The diagnostic technology and intellectual property received support through . That includes the office’s suite of innovation programs, aimed at developing research-backed inventions for market, including the prestigious national Innovation Corps (I-Corps) program through the National Science Foundation. UofL is one of only a handful of universities nationwide to have each of these programs — and it’s the only one to have them all.

UofL developing AI model to improve outcomes in heart surgery
Tue, 23 Jan 2024

As artificial intelligence continues to transform the medical field, UofL is investigating how AI could help improve patient outcomes during heart surgery.

A $750,000 grant from the American Heart Association will allow researchers to advance AI specifically for acute kidney injury and complications during or following cardiac surgery.

Acute kidney injury can result in increased mortality or persistent kidney dysfunction and, because it has a wide variety of contributing factors from patient-specific conditions to procedure complexity, this issue can be difficult for physicians to predict and prevent.

The project is a joint effort between UofL researchers from the , , the , and researchers at , and .

The team will develop machine-learning models to analyze detailed clinical patient data and create a personalized risk prediction and decision-making process for managing kidney injury in heart surgery patients. They then will validate the process using independent databases and clinical trials at UofL Health.

Jiapeng Huang, professor and vice chair of the anesthesiology and perioperative medicine department

UofL’s Jiapeng Huang, professor and vice chair of the anesthesiology and perioperative medicine department, is principal investigator for the project. As a cardiac anesthesiologist at UofL Health, he also sees numerous patients who deal with acute kidney injury.

“Our goal is to use AI and machine learning methodology to do two things. One, to predict in real time when the patient might develop acute kidney injury or if the patient will be at risk for acute kidney injury,” he said. “The second thing is to develop a clinical decision-support system to help the clinicians do the right thing for the patients at the right time to reduce chance of acute kidney injury after heart surgery.”
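The two pieces Huang describes, a real-time risk prediction and a decision-support trigger, can be sketched in miniature. This assumes a plain logistic risk score; the features, weights and threshold below are invented for illustration and are not the team’s actual model:

```python
import math

# Hypothetical binary risk factors with illustrative weights.
WEIGHTS = {"age_over_70": 1.2,
           "baseline_creatinine_high": 1.5,
           "long_bypass_time": 0.9}
BIAS = -2.0

def aki_risk(patient):
    """patient: dict of risk factors (1 = present, absent = 0).
    Returns a 0-1 probability-style risk score."""
    z = BIAS + sum(WEIGHTS[k] * patient.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))  # logistic squashing to 0-1

def alert(patient, threshold=0.5):
    """Decision support: flag patients whose predicted risk crosses
    the threshold so clinicians can act before injury occurs."""
    return aki_risk(patient) >= threshold

high_risk = {"age_over_70": 1, "baseline_creatinine_high": 1,
             "long_bypass_time": 1}
print(alert(high_risk), alert({}))  # prints "True False"
```

In the real project, the weights would be learned from the clinical databases the article mentions and validated in trials, rather than set by hand.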

While Huang and UofL faculty member Bert Little focus on the clinical procedures and decision-making process, a team of engineers will build the AI technology: Lihui Bai, professor of industrial engineering at the Speed School; Xiaoyu Chen, assistant professor of industrial and systems engineering at SUNY Buffalo; and George (Guanghui) Lan, professor of industrial and systems engineering at Georgia Institute of Technology. The technology will use patients’ clinical information from before, during and after surgery to inform physicians of the best sequence of treatment to reduce the chance of kidney injury after heart surgery.

For the last 10 years, AI has been used in the medical field to analyze large health care datasets. AI can recognize patterns more easily than the human eye or brain, according to Huang, and can significantly benefit patient outcomes.

“This is one of those research (projects) that will benefit patients directly,” he said. “Acute kidney injury happens in about 25% of patients after cardiac surgery. This study aims to protect patients from acute kidney injury after heart surgery.”

The three-year project, which is currently in phase 1, began in July 2023. During this early phase, the team is establishing the database and prediction model. In year three, clinical trials conducted at UofL Health will be used to determine whether the predictive modeling and clinical decision support system will reduce the rate of acute kidney injury after cardiac surgery.

UofL Health is an excellent partner for this project as it is one of the premier cardiac programs in the nation, according to Huang. It was responsible for the first heart transplant in the state of Kentucky, as well as many innovations in artificial heart pumps. UofL Health cardiovascular surgeon Siddharth Pahwa and cardiologist Dinesh Kalra, for example, are involved in other studies, including cardiac imaging and data collection in addition to patient care.

“UofL Health always focuses on improving patient safety and outcomes,” Huang said. “UofL faculty and researchers are perfect partners to perform clinical studies to advance our knowledge and benefit our patients at UofL Health.”

UofL law professor developing generative AI toolkit to aid legal writing instruction
Tue, 14 Nov 2023

While many are wary of artificial intelligence and its feared effect of supplanting the human creation of content, one University of Louisville professor is leading an effort to help her colleagues use it in the classroom.

Tanner, assistant professor of law at UofL, has won a teaching grant to develop a toolkit that law professors anywhere can use to incorporate generative artificial intelligence (genAI) into their legal writing curricula.

GenAI is technology that can create text, images, videos and other media in response to prompts inputted by a user – otherwise known as a human being. Of the various types of genAI software currently available, ChatGPT is probably the best known.

Over the next year, Tanner and her team will design, develop and test resources that will become open-source materials for use in teaching legal writing and other law subjects. As the term implies, “open-source” means the materials will be open to anyone, free of charge.

Tanner wants the legal community – particularly those, like her, who teach legal writing – to accept that genAI is becoming part of the teaching environment, and having resources that enable an instructor to use it is key to making it work effectively in the classroom.

“Generative AI will change the way we teach. Some professors worry that a sea change is on the horizon – that we will not be able to assess student learning the way we did pre-ChatGPT,” she said. “Undoubtedly, we will have to adapt. And though generative AI will challenge the way we teach, there is also significant potential for innovation.”

The toolkit will help curious teachers without much prior preparation in genAI to develop knowledge and skills that will help them embrace it in a way that enhances rather than erodes their sense of competency.

“A law professor who teaches legal writing will be able to use the toolkit to continue developing their teaching identity rather than be threatened by the increased tempo of technological change,” Tanner said.

“We intend to show instructors how to frame teaching objectives that either work around or embrace generative AI, giving them a framework that is adaptable to evolving technologies. We also will provide examples of how to align teaching objectives with student outcomes.”

The toolkit also will enable those who use it to customize their use of genAI. “We do not intend for this to be a prescriptive approach to legal writing instruction nor one-size-fits-all writing assignments. Instead, it will focus on principles that each professor could adapt for their own purposes.”

Working with Tanner on the project are Tracy Norton, professor of law, and William Monroe, assistant director for instructional technology, of the Paul M. Hebert Law Center at Louisiana State University.

The toolkit is expected to launch in fall 2024.

Sen. Mitch McConnell visits UofL to announce $20 million in federal funding for cybersecurity workforce training
Thu, 19 Jan 2023

Senate Republican Leader Mitch McConnell announced today that $20 million in new federal funding soon will be available for training cybersecurity professionals through programs such as the successful Cybersecurity Workforce Certificate developed and piloted at UofL.

The Fiscal Year 2023 government funding bill contains significant resources to support important Kentucky institutions and programs. As Senate Republican Leader and a senior member of the Senate Appropriations Committee, Sen. McConnell advocated on behalf of the University of Louisville in this year’s government funding process, including his support of the NSA’s cyber workforce training initiative, which has funded educational programming at the University of Louisville.

“It’s an honor to return to my alma mater and announce that NSA’s cyber workforce training initiative, which has made landmark investments in educational programming at UofL, will once again receive robust resources from this fiscal year’s government funding bill. UofL is at the center of the growing cybersecurity field, benefitting the Commonwealth’s economy and our country’s national security. I look forward to more students taking part in this program and entering the workforce with the skillset needed to succeed in the 21st century,” said Sen. McConnell.

UofL launched its Cybersecurity Workforce Certificate in 2020 thanks to $6.2 million in funding from the NSA as a pilot for a national program supported by Sen. McConnell to train a qualified cybersecurity workforce. The UofL program so far has enrolled more than 200 students, with an emphasis on training military veterans and first responders in health care cybersecurity and logistics.

“The need for highly skilled cybersecurity professionals to protect our information systems is increasing rapidly. The University of Louisville is leading the way to meet this need in developing our innovative cybersecurity workforce training program and assembling a coalition of universities to support and replicate this training on a national level,” said Lori Stewart Gonzalez, interim president of UofL. “We are grateful to Sen. McConnell for supporting this and other programs with additional funding, and for his advocacy on behalf of UofL and Kentucky.”

UofL’s cybersecurity certificate program includes online learning, hands-on applied learning labs at all levels and gamification components, along with online technology industry badging from Microsoft, IBM and Google. Students gain expertise in artificial intelligence, robotics process automation, blockchain, internet of things (IoT), machine learning and other areas to earn individual badges throughout the certificate’s 24 modules.

“With technology continuing to become more of an integral piece of our everyday lives, a strong cybersecurity industry and workforce are the most important protections we have to ensure secure businesses and critical infrastructure across the Commonwealth and nation,” said Kevin Gardner, UofL’s executive vice president for research and innovation. “As a top research institution, UofL is proud to lead the charge on this important work through groundbreaking and unparalleled research, innovation and academic programs. We appreciate Sen. McConnell’s support for advancing cybersecurity technology and growing our cybersecurity workforce.”

UofL is partnering with corporations, including logistics companies, health care providers and others, as well as other colleges and universities to create a national cybersecurity training coalition. UofL’s university partners include Kentucky Community and Technical Colleges, University of North Florida, University of Arkansas – Little Rock, City University of Seattle, Kentucky State University, Simmons College, City University of New York, Kennesaw State University, Hood College and Northwest Missouri State University. The University of West Florida and Purdue University Northwest also are building university coalitions for cybersecurity workforce training.

Interim UofL President Lori Stewart Gonzalez, left, Sharon Kerrick and Kevin Gardner joined Senate Republican Leader Mitch McConnell, second from left, on Jan. 19 to discuss resources he secured to benefit Kentucky in the recent government funding bill.

“This new funding can allow UofL and the other lead universities to leverage resources and initiate cooperation for the good of the entire cybersecurity national community,” said Sharon Kerrick, associate professor and assistant vice president, UofL Digital Transformation Center.

Following the initial $6.2 million in funding to launch the UofL program in 2020, the university received an additional $2.3 million to expand it to include logistics and train-the-trainer components in which students are trained to instruct others in their organizations.

The UofL Digital Transformation Center provides future-focused curricula and educational tools to help train the workforce in fast-growing technology areas by integrating the best features of industry and academic institution relationships.

UofL internal grants fund research in AI, equity and more
Mon, 25 Jul 2022

Dozens of University of Louisville researchers have been awarded internal grant funding to explore topics ranging from artificial intelligence to COVID-19 and more.

The funding comes through two programs in the UofL Office of Research and Innovation: the Jon Rieger Seed Grants and Programmatic Support programs.

“This internal funding provides critical support for groundbreaking research and scholarship,” said Will Metcalf, associate vice president for research and innovation. “I’m excited for the strong and diverse projects funded in this round, and look forward to seeing what these researchers accomplish.”

Jon Rieger Seed Grants provide up to $7,500 to assist full-time, active-status early career researchers in the initiation of new scholarship, creative activities and other research approaches. Winners this round were:

  • Collaborative multimodal sensor fusion with edge intelligence for connected and autonomous vehicles (Sabur Hassan Baidya, J.B. Speed School of Engineering);
  • Assessing and responding to psychosocial and health equity needs of immigrant and refugee communities through library partnerships (Rebecka Bloomer, Kent School of Social Work);
  • Evaluation of the physicochemical properties of a new bioceramic endodontic sealer: an initial approach (Eduardo Antunes Bortoluzzi, School of Dentistry);
  • Emotions, context and alcohol use (Konrad Bresin, College of Education and Human Development);
  • Developing 3D-printed lattice nasopharyngeal swabs for COVID-19 tests (Yanyu Chen, J.B. Speed School of Engineering);
  • The impacts of drought on hemp physiology, chemistry, and the microbiome (Natalie Christian, College of Arts and Sciences);
  • Multi-pathogen wastewater surveillance system to improve health and stop pathogenic outbreaks within low- and middle-income country communities (Rochelle Holm, School of Medicine);
  • Reactions to experiencing discrimination (RED) study (Yara Mekawi, College of Arts and Sciences);
  • Quantifying the controls of streamflow permanence and sediment connectivity in urban headwater streams (Tyler Mahoney, J.B. Speed School of Engineering);
  • A physics-based machine learning framework for smart self-adaptable multi-stage manufacturing systems (Luis Segura Sangucho, J.B. Speed School of Engineering);
  • Homing in: community engaged research on LGBTQ+ youth houselessness in Louisville, Kentucky (Cara Snyder, College of Arts and Sciences); and
  • Eliciting expert knowledge in empirical selection of machine learning methods (Xiaomei Wang, J.B. Speed School of Engineering).

The Programmatic Support grant provides up to $3,000 of funding to assist full-time, active-status faculty with the completion of a project where other funding sources are not available. Winners this round were:

  • Human mate-copying and the popularity of Halo in an online venue (Michael Cunningham, College of Arts and Sciences)
  • Development of a gastric reflux simulator for the analysis of teeth and dental materials (Grace DeSouza, School of Dentistry)
  • Youth/young adults of color responding to racial inequities and COVID-19 in listening sessions (Melanie Gast, College of Arts and Sciences)
  • Validating techniques for collecting vocal and listening effort during remote and in-person speech-language intervention (Maria Kondaurova, College of Arts and Sciences)
  • On the border, between empires: A bioarchaeological examination of health, diet, and biological relatedness in individuals from the cemetery of Oymaağaç during the Roman to Byzantine transition (Kathryn Marklein, College of Arts and Sciences)
  • Chance designs recording (John Ritz, School of Music)
  • Development of expertise in perception of speech and music (Christian Stilp, College of Arts and Sciences)
  • Automating emotional safety and post-traumatic growth: An exploratory study to investigate gender-based violence survivors' user experiences on social media (Heather Storer, Kent School of Social Work)
  • Campus sustainability, community context (Angela Storey, College of Arts and Sciences)
  • Antibiotic bone cement intramedullary nails for treating orthopaedic infections (Michael Voor, School of Medicine)
  • Exploring the relationships between student behaviors and special education teachers' physical well-being and instruction: a pilot study (Jeremy Whitney, College of Education and Human Development)
  • Effect of powder feedstock on the material characteristics of small-size Ti6Al4V geometries fabricated by laser powder bed fusion additive manufacturing (Li Yang, J.B. Speed School of Engineering)
  • Translation of the Chinese fashion industry: an ethnographic approach (Jianhua Zhao, College of Arts and Sciences)

In addition to the programmatic and Rieger grants, two more internal grant programs accept applications annually in the fall: Collaborative Mentoring Grants (up to $10,000) and Capacity Building Grants (up to $25,000). Open applications will be announced in September, with application deadlines in late October. More information is available from the Office of Research and Innovation.

]]>
UofL teams with Microsoft to explore AI in research /section/science-and-tech/uofl-teams-with-microsoft-to-explore-ai-in-research/ Mon, 09 May 2022 14:56:41 +0000 /?p=56368 The University of Louisville is one of a handful of schools selected by Microsoft to explore how artificial intelligence can be used to help researchers.

UofL is one of eight Microsoft Academic Research Consultants, or MARCs, that will study how researchers might leverage the technology to, for example, sift through large data sets and glean insights. The idea is to understand needs and develop next-generation tools and training that could generate more groundbreaking research here and around the world.

“UofL is home to a rich pool of top researchers in high-tech, cutting-edge fields,” said Sharon Kerrick, an assistant vice president at UofL and head of the Digital Transformation Center (DTC), which will lead the on-campus Microsoft effort. “We at the DTC are proud to be among the other top schools to partner with Microsoft to enable groundbreaking research that’s engineering our future economy.”

The other MARC schools are Duke University, the University of Rochester, the University of Central Florida, the University of South Florida, Texas A&M, Oregon State University and Washington University in St. Louis. The MARCs will serve as liaisons between Microsoft and researchers, seeking to better understand how AI is being and could be used.

UofL has significant expertise in this kind of tech-enabled education and research; some researchers are already using computing, big data and artificial intelligence to screen potential drugs and compounds, to analyze medical images and more.

UofL also was recently selected by the U.S. Department of Defense to work on research and education to strengthen the country’s cyber defenses. UofL was the only school selected from Kentucky for both networks and one of only a handful to hold the competitive Research-1 classification from the Carnegie Classification of Institutions of Higher Education. UofL also recently received significant funding to develop cybersecurity education and conduct cutting-edge biometrics research.

“UofL has a strong record of researching the digital frontier, artificial intelligence and other technologies,” said Kerrick. “Through this new partnership with Microsoft, we hope to find new ways to leverage those same technologies to benefit researchers.”

]]>
UofL awarded nearly $4M to close skills gap /section/science-and-tech/uofl-awarded-nearly-4-million-to-close-the-skills-gap/ Fri, 28 Feb 2020 14:51:53 +0000 http://www.uoflnews.com/?p=49733 ​The University of Louisville has received nearly $4 million from the U.S. Department of Labor to build a program that will prepare students for the ever-evolving, technology-enabled “jobs of tomorrow.”

The competitive federal grant was announced by U.S. Senate Majority Leader Mitch McConnell, a UofL grad.

The UofL Modern Apprenticeship Pathways to Success (MAPS) program is funded through a DoL apprenticeship initiative. UofL was one of just 28 public-private partnerships funded under this federal program in its most recent round, and is the only one in Kentucky.

​Through MAPS, UofL will create apprenticeships that connect what students learn in class with their eventual careers. The apprenticeships will also give them field experience with disruptive, cutting-edge technologies that can change how work is done.

“At UofL, we recognize that many people entering such industries as advanced manufacturing, healthcare and information technology require new skill sets or retraining in order to be successful,” said UofL President Neeli Bendapudi. “The apprenticeships created by the university and its private-sector partners through this grant program will help to form the workforce of the future.”

UofL will also work with three academic partners — Webster University, Jefferson Community and Technical College and Elizabethtown Community and Technical College. These institutions will help MAPS create transfer opportunities for associate’s degree holders who want to earn a bachelor’s degree, and connect with underrepresented minority students as well as students who serve, have served or depend on a member of the military.

Principal investigator Dr. Jeffrey Sun, of the UofL College of Education and Human Development (CEHD), said preparing students for high-skilled jobs is especially important now, at a time when the world of work is increasingly disrupted and evolving due to technologies like artificial intelligence and automation.

According to a report from the Brookings Institution, automation will be most disruptive in the Heartland, and especially in Kentucky and Indiana. In the Louisville Metropolitan Statistical Area alone, the report says some 670,000 jobs are susceptible.

But while automation may replace some jobs, some reports show it creates others — ones companies can’t seem to fill due to the skills gap. According to a report from Deloitte, advanced technologies in the manufacturing industry will cause an estimated 2.4 million positions to go unfilled between 2018 and 2028.

“The workforce in the Heartland is underemployed, mostly due to manufacturing layoffs and the unpreparedness of workers for higher-skilled jobs,” said Sun, associate dean for Innovation and Strategic Partnerships at the CEHD. “We want our students at UofL to be prepared when new technologies, such as robotics and AI, alter our work, or when market shifts, perhaps from 3D printing, change our business model.”

“By equipping job seekers with the training they need for good, 21st-century jobs, we can help close the skills gap and build upon Kentucky’s growing economy,” McConnell said in a release. “I applaud President Trump for his administration’s focus on apprenticeship programs, and I’m proud to work with him to promote investment in the future of Kentucky’s workers and their families. As Senate Majority Leader, I’m in a better position than ever to deliver for Kentucky communities, and I was proud to partner with UofL to give Kentucky workers every opportunity to succeed.”

]]>
UofL AI diagnostics researcher inducted into National Academy of Inventors /section/science-and-tech/uofl-ai-diagnostics-researcher-inducted-into-national-academy-of-inventors/ Wed, 18 Dec 2019 19:50:00 +0000 http://www.uoflnews.com/?p=49182 University of Louisville researcher Ayman El-Baz, whose work blends artificial intelligence and medical imaging, has been inducted as a Fellow into the National Academy of Inventors.

He and 167 other inventors from institutions around the world will be formally recognized as 2019 NAI Fellows at a ceremony in Phoenix, Arizona, in April 2020, according to a release.

“It is a great honor for me to be one of the NAI fellows,” said El-Baz, a UofL J.B. Speed School of Engineering alum and chair of bioengineering.

At UofL, El-Baz works at the intersection of computer science and medicine. Many of his inventions use artificial intelligence to analyze medical images, allowing them to very accurately diagnose a wide range of conditions.

El-Baz is the sixth UofL inventor to be inducted into the NAI, following Suzanne Ildstad and Kevin Walsh (2014); William Pierce (2015); Paula Bates (2016); and most recently, Robert S. Keynton (2017).

“We’re very proud of Ayman, and all past UofL inductees, for this huge accomplishment and all the hard work behind it,” said Allen Morris, executive director of the UofL EPI-Center. His office works with UofL researchers to commercialize their inventions.

“This kind of honor shows our university’s commitment to and leadership in research, invention and technology commercialization,” he said. “These inventions have the power to change and improve the way we work and live.”

Aside from the EPI-Center, El-Baz has also worked with other UofL programs for technology development and commercialization. He was the first researcher to hit a “trifecta” with UofL’s suite of commercialization programs, having earned entry into the UofL Coulter Translational Partnership, NSF I-Corps and NSF AWARE:ACCESS programs.

“These crucial support mechanisms have enabled me to develop and translate technologies from ideation to commercialization quickly,” El-Baz said.

To date, El-Baz holds eight patents and five copyrights; 11 of his technologies have been optioned and two have been licensed to companies for further development and commercialization. Some technologies have also resulted in startup ventures, like Autism Diagnostics Technologies Inc., which El-Baz co-founded, creating jobs and economic development.

NAI fellows hold a collective 41,500 issued U.S. patents, resulting in 11,000 licensed technologies and companies and generating more than 36 million jobs and $1.6 trillion in revenue, according to the release.

“I am so impressed by the caliber of this year’s class of NAI Fellows, all of whom are highly regarded in their respective fields,” NAI President Paul R. Sanberg said in the release. “The breadth and scope of their discovery is truly staggering. I’m excited not only to see their work continue, but also to see their knowledge influence a new era of science, technology and innovation worldwide.”

]]>