Making the leap from Philosophy to Software Engineering with Caleb Ontiveros
Caleb Ontiveros, who left the University of Notre Dame’s PhD program in philosophy to pursue a career in tech, is often asked by people studying philosophy how he transitioned to software engineering and programming.
Caleb created the Facebook group Philosophers in Software Engineering to help others like him. The group’s members collect and share advice for philosophers interested in learning how to program or in careers in software engineering.
In this episode, Caleb chats with Ledge about philosophical thinking in software, and how software can enhance philosophical thinking.
Transcript
Ledge: Hey, Caleb. Thanks for joining us today. Really appreciate it.
Caleb: Thanks for having me on, Ledge.
Ledge: If you don’t mind, give a little background. You and I met online and you’re doing some really interesting work. I thought it would be cool for the audience to find out about and learn some of your perspectives.
Caleb: Sure. I’m in the Bay Area right now. I’ve been in startups for nearly three years now. Before then, I was studying philosophy in grad school and then transitioned into software engineering. Learned how to program, and I started working in startups.
Ledge: Right. That transition from philosophy to software engineering made you a topical expert in something that actually seems pretty popular. You kind of put together a little community around people who are interested in making that conversion. Those aren’t two things that you often hear together.
I just wonder about that experience.
Caleb: I created a Facebook group called Philosophers in Software Engineering, and that was posted on a few academic philosophy blogs and got a lot of engagement. Tons of people joined, from professors to software engineers.
Initially, you might not think there’s a whole lot of overlap, but I think there are a number of different ways in which philosophy and programming come together.
Ledge: What is that? What’s the framework? Now you’re famous and you need to be the expert in something you just kind of picked up as a hobby.
Caleb: There are two features of philosophy that make moving from philosophy to programming sort of natural.
One feature is logic. A lot of philosophers spend time working in, say, logic, which is quite similar to what you’ll be doing when you’re writing a program. Simple if/then statements. Thinking about different forms of logical operators. That sort of thing.
The second thing that philosophy often stresses is conceptual rigor. Being very clear about the sorts of claims you are making, your reasons for those claims, and ensuring that other people can understand what you’re talking about.
Those things are super important in software engineering. When, say, you want to name a particular table, what are you going to name that table? Is that table going to communicate the thing that you want it to communicate? What sort of role should it play in your conceptual schema of the data model? That sort of thing. Those skills are super useful.
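As a rough illustration of the overlap Caleb describes, here is a hedged sketch in Python – all of the names are hypothetical, not anything from the episode – of how a philosopher’s modus ponens and a carefully chosen table name show up in ordinary code:

```python
# Hypothetical sketch of the two skills described above.

# (1) Logic: the argument form "if P then Q; P; therefore Q" maps directly
# onto an ordinary conditional.
def infer_q(p_holds: bool, p_implies_q: bool) -> bool:
    """Return True when Q is established by modus ponens."""
    if p_holds and p_implies_q:
        return True   # Q follows from the premises
    return False      # Q is not established (not the same as Q being false)

# (2) Conceptual rigor: a name like `user_subscription_events` communicates the
# table's role in the data model far better than something vague like `data`.
user_subscription_events = [
    {"user_id": 1, "event": "subscribed", "plan": "monthly"},
    {"user_id": 1, "event": "cancelled", "plan": "monthly"},
]

print(infer_q(p_holds=True, p_implies_q=True))  # True
```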
Ledge: That’s really interesting. How do you see it play out in the day-to-day work? Is this something that you thought would be the case, or you just discovered it after the fact and then started thinking about it?
Caleb: I guess one thing that also made the transition make sense – and another reason why a lot of other people in academic philosophy are thinking about it – is that the philosophy job market is pretty hard. It’s something a lot of people think about. There are only so many positions available, and programming is a possible thing one can do. That’s another connection.
In terms of the day-to-day, when you’re programming sometimes you’re just on a grind. You know what you’re doing already. You’re just typing away. That sort of thing.
In a lot of programming, you spend most of the time thinking about how you’re going to communicate your ideas – less time actually typing.
Ledge: Right. What are some of the challenges? It can’t be all perfect. What surprised you, making the jump? Where are the places that maybe a former academic might struggle in software, in a high-tech world?
Caleb: Sure. Maybe I’ll just say a little bit more about how I transitioned into the path, from leaving grad school to getting a job.
What I did is I started teaching myself a little bit of Ruby and JavaScript on the side while I was in grad school. Found out I really enjoyed it. This is something I advise lots of people who are interested in programming – or really most people: just try it and see if you like it.
Then there’s sort of a decision point: do you want to go to a coding boot camp or a school, or do you want to continue teaching yourself? I decided to go to a coding boot camp called App Academy. There are quite a few other good coding boot camps as well, but the main reason I went to App Academy is that they are invested in your success. You make a down payment, and then the main bit of their profit comes from some percentage of your salary after you get a job. They will refund your deposit if you do not get a job.
I admired that financial framework, and I knew quite a few people who had had good experiences with that boot camp, so I decided to attend. It was basically just three months of programming all day, which was fantastic. It was a great environment to learn. There were a lot of other students there, and everyone was quite excited and very motivated to be learning.
Ledge: It’s probably all different backgrounds. People who were getting into it at all different stages of their careers and academic backgrounds. I’ve got to imagine the first couple of days are probably kind of nerve-wracking. You’re making a major shift in your whole life there, and there aren’t a lot of places besides coding where you can go full boot camp mode for three months and then think, I’m in a new career.
Caleb: Yeah. The transition is pretty fast. After the boot camp, it only took about a month before I had my first job offer, and a lot of other students had that experience as well.
I think for me, it was mostly actually pretty exciting. The fact that you can just pick up something so quickly, and that there are other people around you who are also really excited about this thing and invested in their own and your success as well. It made for a really good experience.
Ledge: What’s the experience been like of building the community online? That’s super cool that it’s just organic and it got picked up and people resonated with it. Did you expect that, or did you expect to be like three people in there and not sure?
Caleb: I guess if you had asked me to guess how many members there would be, I probably would have guessed that there would be on the order of hundreds. That’s mostly just because I already know quite a few philosophers who made the transition like myself. I get asked by people who are in grad school for advice about how to think about programming. Then I also know people who are software engineers and are just curious about philosophy. Maybe they haven’t formally studied it, but would like to talk to philosophers and would like to talk about philosophical topics in software engineering.
Ledge: So it goes back the other way too. You’re getting engineers that want to transition or think about philosophy; have a deeper view into their work.
What’s that look like? What are those conversations like? That’s got to be a lot of fun.
Caleb: Absolutely. I guess these range from esoteric discussions about the nature of computer programs, or questions about consciousness and computer programs, to maybe more concrete questions like: what are the ethical implications of particular actions by tech companies, or particular technologies? Getting people to talk together about that is a lot of fun.
Ledge: Does it go into the ethical realm? I guess there’s a lot of ethical IT and ethical AI – there’s got to be a lot of that stuff going on these days. You can go pretty deep down that rabbit hole.
Caleb: Yeah. There are just so many different topics. From automation-type concerns, to questions about the digital economy – how you should think about how important attention is, how you should treat the attention of your users, and the ethical boundaries of that sort of work – to questions about how artificial intelligence is going to be developed in a safe and just manner.
Yes – as much, I suppose, as any area of life that touches on the ethical domain.
Ledge: Do you get to stretch the full academic… Go ahead.
Caleb: I was just going to say – sort of bouncing back and forth a little bit here. With respect to interactions in the group, one thing that’s been especially exciting to see is people who are currently in tech helping out people who are curious about particular fields – whether it’s machine learning or web development.
Ledge: Is there an eagerness to…
Caleb: Just this morning, someone posted that they wanted to learn more about machine learning. They’re starting to teach themselves a little bit of Python, this sort of thing. Then a fellow who studied philosophy but has now founded a startup in the Bay Area reached out to him, and will hopefully provide some insight.
Ledge: That’s cool. Is there an eagerness to inject more, I don’t know, philosophic, academic thought into what had been just a tech conversation?
Caleb: Yeah. I think so. I think that’s definitely true. I think one thing that makes this pretty salient is a lot of the current news about larger tech companies – how they use their products and how they might treat their employees. Especially if you’re working in tech, most people want to ensure that they are living well and not doing things that are bad. They want to be thoughtful. I think that’s the source of that kind of eagerness.
Ledge: I talked to Liz Fong-Jones about the way that people can evaluate the technology and companies that they’re working for and the things they’re working on, and the deep questions to ask even before you take a job. Will my work be used on certain kinds of, I don’t know, drone technology or facial recognition and government work? Things that could be stuff I never imagined when I wanted to become a technologist. How we have to be diligent about asking those questions on the way in.
I imagine that kind of deeper thought pattern and thinking is really core to what you’re talking about.
Caleb: I think it also goes the other way as well. There are also questions about, how can we apply software to improve how we think about these questions? Are there interesting ways we can visualize these problems? Are there systems we can build to help us communicate these ideas in a clearer and more rigorous manner?
Ledge: Is there a futurist kind of discussion that goes on – what’s happening next? That sounds like a predictive, fun, academic and thoughtful rabbit hole that you could dive down: if we had known before what we know now, and we had these different, diverse ways of thinking in our discipline, might we have done things differently? How are we going to do things in the future now that we can maybe open those conversations?
Caleb: One thing I’m especially excited about in philosophy is digital ways of expressing philosophical arguments.
In the field of philosophy, a lot of people write books, a lot of people write papers, this sort of thing. But these ways of communicating don’t have version control. It’s a lot harder to see connections between papers that were written 10 years ago, and this sort of thing.
It would be really great, I think, if people were to devise systems to express these kinds of ideas using software.
Ledge: Let’s get some more details around that. That’s super interesting. What are some examples?
Caleb: Some of the things I’ve been prototyping in the past are argument-mapping systems. Where, say, in philosophy – and in other fields as well – you might have a standard argument form of ‘if P then Q; P; therefore Q’. That sort of thing. And then you fill in that form with actual content.
You can create a program that will let you put in different propositions and different logical relationships between those propositions. Over time, you can amass a large database of different philosophical claims.
I think that sort of work could be super exciting. If you imagine clusters of people inputting a particular proposition, like, “I believe there’s some argument for why facial recognition is ethically worrisome,” or something like this. “It violates people’s privacy rights.” Someone can make particular claims about that.
Then you can imagine other people inputting other claims. Logical relationships between those claims. Maybe even pushing back at the original proposition – what does this mean? You can have all this discussion in one place.
I think that would have quite large returns, especially as it’s used over time.
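A minimal sketch, in Python, of the kind of argument-mapping structure described above – the class names, relation labels, and the facial-recognition propositions are illustrative assumptions, not Caleb’s actual prototype:

```python
# Hypothetical sketch: propositions plus logical relationships between them,
# accumulated over time into a queryable map of claims.
from dataclasses import dataclass, field

@dataclass
class Proposition:
    text: str

@dataclass
class ArgumentMap:
    # Each relation is a (premise, relation, conclusion) triple,
    # e.g. relation = "supports", "contradicts", "implies".
    relations: list = field(default_factory=list)

    def add(self, premise, relation, conclusion):
        self.relations.append((premise, relation, conclusion))

    def challenges_to(self, claim):
        """Return every proposition other contributors have entered against a claim."""
        return [premise for premise, relation, conclusion in self.relations
                if conclusion is claim and relation == "contradicts"]

# Usage, mirroring the facial-recognition example above:
worry = Proposition("Facial recognition is ethically worrisome.")
privacy = Proposition("It violates people's privacy rights.")
consent = Proposition("Deployments with informed consent need not violate privacy.")

arg_map = ArgumentMap()
arg_map.add(privacy, "supports", worry)
arg_map.add(consent, "contradicts", privacy)

print([p.text for p in arg_map.challenges_to(privacy)])
```

Storing the relations as explicit triples is one simple way to let later contributors query which claims support or push back on any proposition in the map, which is the accumulation-over-time effect Caleb describes.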
Ledge: It strikes me as almost an application of smart contracts. Where you’d have a logical framework by which to transact and kind of go back and forth and make philosophically sound automated debates, almost. That you could inject a measure of human thoughtfulness into a transaction base. Is anybody talking about that kind of stuff?
Caleb: I don’t know if anyone’s talking about that in particular. Yeah, that’s pretty interesting, that kind of smart contracts platform.
I guess one way in which it is similar is, you can imagine someone making an argument in one field, and then that makes all these commitments in a completely separate domain just because of logical rules and the other propositions in play. That’s definitely a similarity.
I think that the ability to see those connections between completely different domains would be super useful. It’s very hard to do if you’re doing this on pen and paper.
Ledge: Yeah. If you do a cross-disciplinary or cross-dimensional sort of evaluation of frameworks and thought processes, then you can zoom out and go, well, this over here in biology and this over here in machine learning – and if we can apply one framework across that and align the vocabularies, it has overtones of the universal formula. The unifying theorem of all the disciplines.
That would be fun. I keep using the term rabbit hole, but this is the kind of stuff I think all of us nerd out on in our space. That we get to model real world stuff, and the more we can model things that really approximate the real world complexities, the happier we get.
Caleb: Absolutely. I think there are huge returns here. Platforms like this could potentially take better advantage of AI systems in the future.
Ledge: Where does it come into your day-to-day work? Or does it have to be the hobbyist view, kind of on the side still at this point?
Caleb: Day-to-day I’m mostly thinking about maybe more concrete questions at startups. Let’s see. I guess some philosophical ideas that I’m interested in and thinking about on a daily basis are ideas related to self-transformation and wellness-type apps.
There are quite a few apps related to meditation, related to therapy and this sort of thing. That kind of platform, those kinds of mobile apps, are good ways to communicate philosophical ideas and combine them with things like wellness.
One project I’ve worked on in the past is an app that combines the philosophy of stoicism and meditation, and it’s something I’m exploring further.
Ledge: Yeah. Talk more about that. That’s neat.
Caleb: The philosophy of stoicism – it’s an ancient Greek and Roman philosophy. One of the main upshots is that it’s important to be very vigilant about what is under your control and what is outside of your control. A lot of things follow from that point of view.
This idea can be communicated quite well in a mobile app: you just provide a number of different exercises to teach people about stoicism and to build a regular meditation practice or journaling practice. The Stoics were very long on things like reflection and journaling. Creating tools to help people do that and apply it in their lives is quite exciting, I think.
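A minimal sketch of what a daily exercise rotation in an app like that might look like – the exercise names and prompts here are illustrative assumptions, not the actual content of Caleb’s app:

```python
# Hypothetical sketch: rotate a small set of Stoic reflection and journaling
# prompts, serving one per day.
import datetime

EXERCISES = [
    ("Dichotomy of control",
     "List one thing today that is up to you and one that is not."),
    ("Evening review",
     "Journal: what did you do well, what did you do poorly, what was left undone?"),
    ("Negative visualization",
     "Briefly imagine losing something you value, then note how it feels to still have it."),
]

def todays_exercise(today=None):
    """Pick today's prompt by rotating through the list, one exercise per day."""
    today = today or datetime.date.today()
    name, prompt = EXERCISES[today.toordinal() % len(EXERCISES)]
    return f"{name}: {prompt}"

print(todays_exercise())
```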
Ledge: Tons of opportunities there. There’s a lot of work around the guided meditation and such, but those feedback loops maybe not yet being built in. That would be next level – 2.0, 3.0 – of that kind of experience. So it’s not just a consumption model but it’s an interactive type of model. Maybe with some kind of analysis of the journal after the fact. Or like, what about this, what about that? Additional thinking exercises that are personalized based on the feedback.
Caleb: Absolutely. I think the space of audio-to-human interactions is still very underexplored – the same thing with meditation. There’s tons of content that people would love to consume and interact with that still needs to be created.
Ledge: What are you reading right now?
Caleb: The last book I read was a book called The Motivation Hacker, that’s on productivity. That was quite good. One of the main features I liked about that book is that the fellow is clearly a real person. Sometimes you read productivity books and it seems like, oh, this person is not a mortal. They have more than 24 hours in a day. But this book was especially useful because it’s from the perspective of some person who’s very clear about, these are the things I want to do, one of these things is writing this actual book in three months. He provides a pretty clear framework for how one can be more productive.
Ledge: Nice. So, we like to ask everybody, Star Wars or Star Trek?
Caleb: I think I’m a Star Wars person.
Ledge: We’re keeping track.
Ledge: What can’t you live without?
Caleb: What can’t I live without? Food? I’m not sure.
Ledge: What was the last thing you Googled for work?
Caleb: Let’s see. The last thing I Googled for work was probably something related to a React Native library. Yeah. It’s probably something like, why are my versions… What versions do I need to get anything to work?
Ledge: It’s almost always like, why doesn’t something work?
Last question. I love to ask this. I think you’ll dig this as a philosopher. Are you a fan of The Office?
Caleb: You know, I haven’t seen that much of The Office. I know the characters.
Ledge: You know the characters, right? There’s this classic episode where Jim is messing with Dwight, and he’s sending him faxes from future Dwight. He’s saying like, the coffee is poisoned and he’s sort of messing with him all day long.
I like to ask people, if I gave you a piece of paper and a nice, thick black Sharpie and you got to scrawl one fax on there and send it back to yourself. You’re future Caleb and you get to fax yourself in the past. What do you write on that piece of paper, and why?
Caleb: Oh, cool. When is it going? Like 10 years back or something like that?
Ledge: You can choose. How about 10 years?
Caleb: Let’s see. I think it would be something like… Man, this is really hard. I think I’d probably say something like, “Read Ayn Rand.”
I read Ayn Rand recently – one of her books in particular. I think I had the sense that earlier me would have really liked that book.
Ledge: Yeah. I’m an Atlas Shrugged person myself.
Caleb: I didn’t expect to enjoy the book so much. I know Ayn Rand is a little bit controversial in philosophy.
Ledge: It can certainly get people debating a little bit. I love that. It’s a very philosophical answer. I dig that. A lot of times people are like, “Take that job,” or “Hire that guy.”
Cool, man. This is a great topic. Are you looking for more people in the Facebook group, or how can people get in touch with you if they’re interested in this?
Caleb: Absolutely. They can go to Philosophers in Software Engineering – it’s just a Facebook group. You can also follow me on Twitter.
Ledge: We’ll check it out. We’ll make sure it’s in the show notes.
Caleb, thanks for spending time with us. Totally appreciate it.