Pablo Rivas

Expansive leaps in technologies like AI prompt important questions about their impact on people. Pablo Rivas, assistant professor of computer science, is a leader at the intersection of technology and ethics. Rivas, Baylor's site director for the Center on Responsible Artificial Intelligence and Governance, unpacks questions around responsible AI and examines the role Baylor can play as an ethical voice amid this growth.
Transcript
Derek Smith:
Hello and welcome to Baylor Connections, a conversation series with the people shaping our future. Each week we go in depth with Baylor leaders, professors, and more, discussing important topics in higher education, research, and student life. I'm Derek Smith, and today we are talking humans and technology with Pablo Rivas. Dr. Rivas serves as assistant professor of computer science at Baylor. An expert in emerging technologies and ethics, Rivas studies artificial intelligence, machine learning, and more through a lens that blends technical expertise with an ethical focus on human impact.
In addition to being a widely published researcher and author, Rivas is the site director for the Center on Responsible Artificial Intelligence and Governance at Baylor, also known as CRAIG. It's the only National Science Foundation-funded research center focused exclusively on responsible AI. At Baylor, he also serves as the co-convener of the Data Ethics and AI work group in the Baylor Ethics Initiative. He's a great person for us to talk to about all these changing technologies and their impact. Dr. Rivas, thanks so much for joining us on the program today.
Pablo Rivas:
Thank you for having me on the podcast. I'm excited to share some of the work in AI.
Derek Smith:
Well, it's work that you and other colleagues across campus are doing in some exciting ways, and as we're going to talk about on the show, we're only going to see that grow as we move into Baylor in Deeds, the university's new strategic plan. But let's talk about some definitions, some broad topics, before we get to that. And I'm curious, just to start off: when you tell people outside of your field that you study AI, whether they're in higher education or outside of it, someone at church or in the community, are there common questions or comments that you receive when you tell people what you do?
Pablo Rivas:
Yes, I've noticed two main reactions when people hear I work in AI. The first is sheer excitement. People assume that everything these days is powered by AI; marketing buzzwords can make it sound like our phones, thermostats, even toasters are all brimming with some type of super-intelligent software. In reality, not every piece of software is AI-driven, and sometimes companies label things as AI because they have some type of semi-intelligence or just a very good algorithm inside. So a lot of my initial conversations revolve around clarifying what we mean by intelligence in machines and pointing out that we're far away from having robots do all the thinking for us.
The second reaction is a bit of intimidation. Sometimes people assume that because I'm in AI, I'm an expert on every single sub-discipline, from deep reinforcement learning to AI-based cryptography. But AI is so big and so vast that it's impossible for one single person to know everything there is; it's mostly interdisciplinary work. So I also talk about how I work in very specific spaces within AI and how collaboration is usually critical for AI's success in different settings. That's how those conversations usually go.
Derek Smith:
Well, Dr. Rivas, you mentioned that you have some very specific focuses. What are those? How would you describe your realm within the broader AI research landscape?
Pablo Rivas:
I think I would describe it in four major areas. One of them is known as natural language processing, where I study how people communicate online, focusing on open forums where illicit activities might occur, such as human trafficking or the sale of stolen car parts. The goal is to build AI systems that can detect, flag, and ultimately prevent harmful or illegal behavior in those spaces. Another area is computer vision, where my team develops algorithms that identify objects and patterns in images. For example, we're collaborating with the Department of Biology here at Baylor to recognize and track leopard seals from regular photography. This has huge implications for wildlife research and conservation, because leopard seals are a predatory species, so understanding where they go and how they populate the planet helps us understand how invasive the species is.
Another area is cybersecurity. We're currently working with the initiative known at Baylor as the Central Texas Cyber Range in the BRIC, where we work to ensure AI can analyze network patterns and traffic and detect malicious activity and potential attacks, helping make cyberspace safer. And finally, the area I'm most proud of is AI ethics and responsible AI, where we investigate AI systems to make them safer and more equitable. This includes examining how AI can fail, sometimes unexpectedly, or be exploited by bad actors, and how we can create safeguards to mitigate those risks. I'm also working on the next round of funding for the Center on Responsible AI, and we're collaborating with industry partners to create more robust, transparent, and trustworthy AI solutions.
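To make the natural language processing work concrete, here is a minimal sketch, in Python, of the kind of text classifier that could flag suspicious online listings for human review. Everything in it (the posts, the labels, the model choice, the threshold) is a hypothetical illustration, not the lab's actual system.

```python
# A minimal sketch (not the lab's actual system) of flagging suspicious
# online listings with a text classifier, using scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = suspicious, 0 = benign.
posts = [
    "brand new catalytic converter, no questions asked, cash only",
    "selling my old bike, pickup near campus",
    "OEM airbag modules in bulk, message me privately",
    "garage sale this saturday, furniture and books",
]
labels = [1, 0, 1, 0]

# TF-IDF turns each post into a weighted word-frequency vector;
# logistic regression learns which terms signal suspicious activity.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

# Flag a new post for human review if the predicted risk is high.
new_post = ["converters for sale, serial numbers removed, cash only"]
risk = model.predict_proba(new_post)[0, 1]
if risk > 0.5:
    print(f"flag for review (risk={risk:.2f})")
```

In practice a system like this would be one component in a pipeline that keeps a human investigator in the loop, since a flag is a lead, not a verdict.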
Derek Smith:
Well, that's a great description of these different threads that come together in your work, as we visit with Dr. Pablo Rivas in the Baylor School of Engineering and Computer Science. I want to dive into this a little more, particularly on the ethical front, as Baylor focuses on the intersection of people and technology. And let's define a couple of things for those of us outside your field, so you can give us a basic understanding of them. The first pair I want to start with is artificial intelligence and machine learning. How related are they, and how can we think of them?
Pablo Rivas:
Yeah, I'll try to be as precise as I can here. AI, artificial intelligence, is a broad area, a broad idea that focuses on creating machines or software that can perform tasks that typically require some type of human intelligence: things like understanding language, recognizing images, making complex decisions, or even generating some type of creative content. We humans do these things. Machine learning is a subset of AI that focuses specifically on algorithms and statistical models that enable computers to learn from data. Machine learning today is very data-hungry and is built to operate in specific types of scenarios. Think of it as an engine that drives applications or systems: it discovers patterns to make predictions or automated decisions.
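As a tiny illustration of what "learning from data" means here, the sketch below fits a model to examples instead of hand-coding a rule; the house sizes and prices are made up, and a linear model is just the simplest possible choice.

```python
# A toy example of machine learning: the program is never told the rule
# (price is proportional to size); it estimates the pattern from data.
from sklearn.linear_model import LinearRegression

# Hypothetical data: house sizes (square feet) and sale prices.
sizes = [[1000], [1500], [2000], [2500]]
prices = [200_000, 300_000, 400_000, 500_000]

model = LinearRegression()
model.fit(sizes, prices)        # "learn" the pattern from the examples

print(model.predict([[1800]]))  # predicts ~360,000 for an unseen size
```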
Derek Smith:
So machine learning is one subset. Are there any other common subsets that we might hear about when we hear stories on the news or other Baylor faculty talking about this?
Pablo Rivas:
I think you may also hear a lot about deep learning. Deep learning is part of machine learning and artificial intelligence as well; it's a specific case of machine learning that is powered by large amounts of data and large amounts of compute power. It's a very exciting area today. If you have enough resources to do research in this space, you can create very large models, like large language models or vision models, that can understand the world better or communicate with humans more seamlessly.
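For a sense of what makes deep learning "deep", here is a minimal PyTorch sketch: the model stacks layers with non-linearities between them, and large language or vision models follow the same basic idea at vastly larger scale. The sizes and data here are arbitrary placeholders.

```python
# A minimal "deep" model: stacked layers rather than a single linear map.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(10, 64),  # input layer: 10 features in, 64 hidden units
    nn.ReLU(),          # non-linearity between layers
    nn.Linear(64, 64),  # a second hidden layer (add more to go "deeper")
    nn.ReLU(),
    nn.Linear(64, 1),   # output layer: a single prediction
)

x = torch.randn(32, 10)          # a batch of 32 hypothetical examples
y = torch.randn(32, 1)           # hypothetical targets
loss = nn.MSELoss()(model(x), y)
loss.backward()                  # gradients flow back through every layer
```

Training a model like this to a useful state is mostly a matter of data and compute, which is exactly the resource constraint mentioned above.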
Derek Smith:
So obviously, as a computer scientist, you, Dr. Rivas, have an interest in understanding the technical aspects, what makes these systems work, what makes them tick. And you've applied that, as you mentioned, like helping law enforcement look for illicit activity online. But you're also interested in the ethical implications, the questions raised by these growing technologies and the powers they give people. So I'm curious, where did that come together for you, and what are some of the ways you've been able to build your own ethical expertise and understanding alongside the scientific and technical side?
Pablo Rivas:
I don't consider myself an ethics expert in the sense that ethics has traditionally been thought about; I would say I work more in applied ethics. Anyone who knows philosophy very well knows exactly what I'm talking about. My work is very, very applied. My journey began around 2015, shortly after I finished my seminary degree here at Truett. Then I went to my first job as a computer science professor with seminary training, which is a strange combo, and having that background gave me a very unique perspective on ethics and social justice issues.
So I was assigned to teach an undergraduate course called Computer Ethics. And when we looked at all the issues that computers and technology face, AI was at the top of that conversation. That led me to focus more on teaching undergrads to think ethically about AI, and it opened my eyes to a lot of dilemmas and problems. I found myself spending a lot of time joining those conversations and dealing with issues that came to light when perspectives from industry and academia were brought together. That was a very eye-opening world to be part of.
Derek Smith:
This is Baylor Connections. We are visiting with Pablo Rivas, assistant professor of computer science at Baylor and site director for the Center on Responsible Artificial Intelligence and Governance at Baylor, also known as CRAIG. Well, Dr. Rivas, you mentioned ethical dilemmas, and I know they continually change, but here in early '25, what are some of the more important ethical dilemmas, the ones you yourself are studying and, as you look at industry as a whole, the ones you see them needing to consider as well?
Pablo Rivas:
I think there are three big areas or dilemmas, or I'll call them aspects: FAT. One is fairness, the second accountability, and the third transparency. In fairness, we study AI systems to make sure they don't discriminate across race, gender, socioeconomic status, or any other dimension that is important to us as a society. This involves, of course, thinking ethically about the data we use and testing systems to make sure their performance is considered fair in the applications where we need them. Because not everything needs to be fair with respect to gender, for example, we need to be very specific about where the systems are applied. Accountability is another big piece, because as AI gets more powerful, we need to clarify who's responsible when systems make mistakes or cause harm. Is the developer responsible, or the organization, or the end user, or is it a shared responsibility, and what does that imply? Making this clear is essential for moving forward with AI adoption.
And lastly, transparency. I think people should know why AI makes certain decisions. The average person doesn't need to know exactly how AI works (I would like to know, of course), but as long as you understand why a decision is made, and there's a degree of interpretability, that can help people trust and verify the results AI is providing. Together, these aspects help us create robust governance, ensure these principles are upheld, and build support in our society.
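One concrete way the fairness testing described above can look in practice is a group-rate comparison, sketched below. The decisions and group labels are hypothetical, and demographic parity is just one of several metrics a team might choose; as noted above, which dimensions must be treated as protected depends on the application.

```python
# A minimal sketch of one common fairness check: comparing a model's
# positive-decision rate across groups (demographic parity).
from collections import defaultdict

decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]  # 1 = approved, 0 = denied
groups    = ["A", "A", "A", "B", "B", "B", "A", "B", "B", "A"]

totals, approvals = defaultdict(int), defaultdict(int)
for d, g in zip(decisions, groups):
    totals[g] += 1
    approvals[g] += d

rates = {g: approvals[g] / totals[g] for g in totals}
print(rates)  # {'A': 0.4, 'B': 0.6}

# A large gap between group rates is a signal to investigate, not proof
# of discrimination; whether it matters depends on the application.
gap = max(rates.values()) - min(rates.values())
print(f"demographic parity gap: {gap:.2f}")  # 0.20
```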
Derek Smith:
Well, individuals like yourself and colleagues at Baylor and other institutions are focused on the idea of responsible AI and on educating others about it. So I'm curious: you mentioned CRAIG at the top of the show, and you're the site director at Baylor. What is CRAIG, and what are some of the areas in which we're seeing research emanate from the center?
Pablo Rivas:
CRAIG stands for the Center on Responsible Artificial Intelligence and Governance. It's a very clever acronym that we came up with. And it's an IUCRC, which is another acronym, from the National Science Foundation: Industry-University Cooperative Research Center. The National Science Foundation created this program of cooperative research centers to partner industry with academia, creating a consortium that today includes Baylor, Ohio State University, Rutgers University, and Northeastern University. Each of these institutions brings strengths in different areas, like AI safety, robustness, ethics, and governance. Right now we are awaiting the next cycle of funding from NSF, and we have a list of industry partners who have committed to fund the center.
The ultimate goal is to create a space where academic and industry professionals work side by side to solve real problems and challenges in AI, starting with things like vulnerabilities. Here at Baylor, we can bring in a client and say: if you are concerned about vulnerabilities, we can help you create this product; we can build tests and fix any issues. And with our partners, we can help these companies navigate emerging regulations or standards and create processes for them, so that they can put a stamp on their products that says: this was made responsibly. The consumer can trust that, and it will hopefully help the adoption of these products and services.
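As a generic illustration of the kind of vulnerability testing mentioned above (and only that; CRAIG's actual methodology isn't described in this conversation), one simple check is whether small perturbations of an input flip a model's prediction:

```python
# A generic sketch of a robustness check: does the prediction survive
# small random perturbations of the input? The "model" here is a
# trivial stand-in threshold rule.
import numpy as np

def predict(x: np.ndarray) -> int:
    return int(x.sum() > 0)  # stand-in for a real trained model

def stability_under_noise(x: np.ndarray, trials: int = 100, eps: float = 0.1) -> float:
    """Fraction of small random perturbations that leave the prediction unchanged."""
    base = predict(x)
    rng = np.random.default_rng(0)
    unchanged = sum(
        predict(x + rng.uniform(-eps, eps, size=x.shape)) == base
        for _ in range(trials)
    )
    return unchanged / trials

x = np.array([0.05, -0.02, 0.01])  # an input near the decision boundary
print(stability_under_noise(x))   # low values flag fragile behavior
```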
Derek Smith:
Well, and Dr. Rivas, as you described that, what Baylor's involved in fits into where Baylor's going in this new season of the strategic plan, Baylor in Deeds. One focus within the commitment to interdisciplinary research is human interface with technology, blending technical expertise, ethics, and the university's Christian foundation to have a voice in what you've just described. What is that focus, and where do you see Baylor growing in this area? And how do units like the Ethics Initiative, like CRAIG, and others blend together to promote that?
Pablo Rivas:
This is a good question. I think Baylor, with this new strategic plan that specifically mentions AI, is in a unique position as an institution, because we think about AI as having a moral and a spiritual dimension in how we approach the things we do and how we share them with students and humanity in general. As we move forward, I see Baylor working with faculty across different disciplines, not only engineering and computer science but philosophy, theology, psychology, biology, and many, many others, tackling important questions about how technology can enhance rather than diminish human dignity. I think our students here at Baylor can become professionals trained in responsible, ethical thinking about AI, not just building things for profit. Profit is good, but keeping the moral and ethical responsibilities in mind is important. And in many ways, I believe that here at Baylor the virtues of honesty and justice are part of the [inaudible 00:18:42] educational experience. That's where Baylor's faith-based mission really aligns with the future of AI research and development.
The Baylor Ethics Initiative in particular is a community of scholars and practitioners here at Baylor dedicated to exploring questions about belief, practice, and cultural, economic, and political systems. The goal of the group is to inspire research and dialogue among faculty and staff navigating ethical challenges. The group I am leading, or co-leading with Dr. Neil Messer from the Department of Religion, is called the AI and Data Ethics Research Group. Here we're looking at how AI and data-driven technologies are making an impact on society, focusing on privacy, fairness, and other issues of the common good. There are other research groups within the initiative as well, in bioethics, ethics and leadership, and global ethics. All of these are working toward an environment for ethical reflection and ethical thinking, and for collectively doing research as a group.
Derek Smith:
Well, Dr. Rivas, as we head into the final couple of minutes of the program, my sense is that you've obviously been doing work in this area for a while, along with other colleagues. The Ethics Initiative group has been formed, and you've been working together. But as we move into Baylor in Deeds, I think we'll see even more, and we'll have to have you and your colleagues on again to talk about some of the fruits of this focus. But I'm curious, just to sum it up: when you think about the opportunities for Baylor, for the School of Engineering and Computer Science at Baylor, why is this the right time and the right place for us to be focusing on this? And what are you most excited about seeing as it grows?
Pablo Rivas:
Well, I'm super excited that we have the R1 recognition as a research institution; that opens up a lot of new opportunities, in particular in ECS. For example, we have a new dean bringing a dynamic new vision that involves AI and ethical thinking as well. And we have critical mass right now, with faculty whose state-of-the-art expertise and research spans AI, data science, cybersecurity, and ethics. Because of the culture we have here in the Department of Computer Science, in ECS, and at Baylor as a whole, I think there's a very unique space where we can come together and talk about some of the biggest problems we can solve with AI. And beyond ECS, of course, law, business, philosophy, religion, everybody can contribute by joining the efforts of the Ethics Initiative and the AI Ethics Group.
And we have momentum. We're starting with a new group, a new cohort of 12 faculty outside Computer Science and ECS, who have just put effort into discussing the next steps in AI and ethics research. So I think we have good momentum, and Baylor is poised to make an impact in the coming months and years.
Derek Smith:
Well, it's an exciting time and we look forward to seeing the impact of that. I know it's important for Baylor's Christian voice to play a role in the way people view, shape, and approach some of these technologies. Well, Dr. Rivas, it's great to have you on the program again. Thanks for taking the time to share with us and educate us, and we look forward to talking to you again one of these days.
Pablo Rivas:
Thanks for having me, Derek. I appreciate it.
Derek Smith:
Great to have you with us. Pablo Rivas, Assistant Professor of Computer Science and Site Director of the Center on Responsible Artificial Intelligence and Governance, our guest today on Baylor Connections. I'm Derek Smith. A reminder you can hear this and other programs online at baylor.edu/connections and you can subscribe on iTunes. Thanks for joining us here on Baylor Connections.