Dive deep into the cutting-edge world of AI with our latest podcast episode! Listen in as Jennifer Evans moderates a thought-provoking panel discussion on the future of artificial intelligence. Join industry leaders Patrick McGrath, Stacy Thorwart, and Dan Williamson as they explore the latest trends and innovations in AI. The audio from this insightful event has been transcribed using Adobe Premiere Pro for your convenience.

LISTEN NOW

The New Frontier: Exploring the AI Landscape – Audio Transcription

Jennifer Evans, bkm OfficeWorks VP Design + Development:
Today’s discussion explores the AI landscape. We are talking with three industry experts to understand how they are currently using AI, what they predict for the future of AI, and how we can get started with AI in our business and personal efforts. All right. Thank you for being here, everybody. Thank you, panelists, for coming; some of you flew here to be with us today. Patrick navigated some Orange County traffic, which feels just as daunting.

I will start out with some questions, but I wanted you to know that we will be taking Q&A from the audience mid-session and later on, so get your questions ready. I think we’re going to have a great session today. I want to get started by just asking for a definition, some framework: what is AI? We can just start there, because I think a lot of us are confused. Outside of playing with ChatGPT, I am one of those confused people, so I would love to direct that question to you.

Patrick McGrath, President West Region at Savills: 
Yeah, sure. When I talk about what AI is, I like to start with a framework for understanding the different tiers of AI. AI is your largest bucket, the one that most things, most algorithms, will fit into, and then you have different tiers that go down from there. Machine learning is a more specific study within computer science, and then you have deep learning, so there are these more and more specific uses. Within deep learning you get what’s called generative AI, which is what ChatGPT is. That’s the most advanced we’ve gotten with AI capabilities, but there’s a whole landscape of AI out there. Really, AI to me is a branch of computer science that’s about trying to make machines mimic capabilities that humans have, right? Language, mathematics, those types of things. So it’s a study within computer science looking at mimicking human intelligence. That’s how I generally think about AI.

Jennifer Evans: 
Anyone got anything to add to that?

Dan Williamson, Director of AI at Ryan Companies: 
I mean, I think that’s a fantastic definition. I would just layer in why it’s an interesting time to be talking about AI. From my perspective, it’s what OpenAI has done with ChatGPT. By show of hands, how many people in the audience have used ChatGPT? Virtually everybody. In less than two years, you’ve seen massive adoption of a use case for generative AI. That, to me, is why it’s such an interesting time. A couple of numbers: $280 billion over the last five years has been invested in AI by VCs. $13 billion was Microsoft’s investment. And we think close to $1.5 to $3 trillion will be the impact on GDP. So it’s not the metaverse; there’s something real here. And the use cases that we’re going to talk about today hopefully show a path to boosted productivity for individuals in this room and for organizations that are using it the right way.

Stacy Thorwart: 
Yeah. And I would just add to that, too. Because a lot of us are using tools like ChatGPT now, it has almost become synonymous with AI. When you think of AI, you think of ChatGPT, but there are so many other things and capabilities beyond just ChatGPT. ChatGPT is kind of a great gateway drug into the world of AI. A framework that has helped me think about AI, and how ChatGPT relates to that broader funnel or spectrum you were describing, is to think of tools like ChatGPT, along with other tools like image generators and music generators, as all parts of generative AI, which in some ways I like to think of as the creative cousin of the AI family. Generative AI is really good at some of the tasks we think of as coming from creative professions: writing, image generation, video generation, and music generation. So that’s a helpful analogy for how you can think about it beyond just ChatGPT as one tool.

Jennifer Evans: 
So I would love to know what is in your AI toolbox. What are you currently using? I’m curious both personally and professionally. Ooh, I want all of you to answer. Stacy, I want to pick on you first.

Stacy Thorwart:
Okay. Yeah. It’s a tough question to answer succinctly, because I’m constantly trying and experimenting with different tools. But as an interior designer, some of the tools that I, and other design professionals, have found really helpful: Midjourney is my go-to for image generation. You may have heard of it, or of some other popular ones out there like Stable Diffusion and DALL-E, but with Midjourney I find that the quality and the accessibility of the tool are superior to some of the others. I really like another one called PromeAI, which is fantastic at turning basic hand sketches into compelling renderings very quickly and easily, so I love that one as well. And then another one that I just like to have a lot of fun with is called Immersity AI, which takes any static image and can create 3D and movement assets from it very quickly and easily. That’s been a great way to add some dimension to static imagery. So those are a few of my favorites from an interior design standpoint. And then certainly others like ChatGPT and Gemini, ones that you’re probably familiar with.

Dan Williamson: 
Another great response. I’ll sort of bring it back to the enterprise level. I think we’ve done a really good job of using machine learning and building prediction models off the back of our data infrastructure. For us, that’s a great use case where we can bring forward answers to clients about what’s happening across the marketplace, what’s normal in the marketplace, based off a large set of structured data that we have in our organization across legal documents. So that’s been a fantastic use case. And going back to the gen AI layer, the ability to now prompt it in a very human way and get those answers is another interesting advancement on top of the pillars that we’ve built over the years. I see that at the enterprise level coming to fruition right now, which is very interesting. We’ve got some people in the audience that we’ve brought over recently to help us intersect between the business and the technology elements of the platform, and I think there are some really interesting use cases in terms of prompting the marketplace and creating more engagement across it. Personally, my favorite consumer product that I didn’t think was going to pull me into an AI loop is these Meta smart glasses. I got them on the recognition that the Bluetooth audio is fantastic; that was what pulled me in. But as I started wearing them, cycling and riding, not having AirPods in, taking phone calls, what I soon started doing was prompting: hey Meta, turn the volume up; hey Meta, change the song. And I quickly had this epiphany that, wow, this is the way that I’m going to get pulled into the AI assistant world. So my big bold prediction is that this holiday season will be a big eye opener for people as this gets deployed at scale. 
And Meta’s AI assistant will be ahead of the curve in terms of adoption.

Patrick McGrath:
I love it. I want to build off of that, maybe not from a business standpoint, but a personal standpoint. The voice side of ChatGPT, and Gemini released that as well on the Google side: being able to talk and have that assistant act as just a thought partner. I’ll go on a walk and just have it on in my headphones and say, hey, what do you think about this idea for a presentation? And it gives you a response back. Just use it as a thought partner to work through some ideas, and then you can come back to your desk and have the whole transcript right there of what you worked through, with bullet points if it gave you those. So I actually like engaging with it like it’s a little partner. There’s a really good book by Ethan Mollick called Co-Intelligence that basically says to treat these generative AIs like humans, even though that’s the most taboo thing you can do. But they’re trained on human language, and so if you treat them that way, they respond really well and get you what you’re after, as long as you understand they’re just statistical text prediction machines; they’re not anything more than that. On the business side, there’s a whole bunch of things we’re working on with generative AI, trying to give people a preferred route to leverage it for productivity gains, whether that’s general transcription of meetings, being able to summarize text, or doing general document analytics on unstructured documents, PDFs. Ryan does a lot of this; we’re a builder and a real estate developer, so we have a lot of contracts, and being able to extract information out of documents is really huge. And then the really exciting stuff that’s not your typical generative AI is more machine vision or computer vision, which is a theme or a flavor of AI.
So we are working with some Boston Dynamics robots to walk our job sites, using computer vision to try to understand what’s going on in the construction of the site and how much of our schedule is actually complete. So we’re trying to investigate those use cases as well, beyond the generative AI that is the reason we’re all here, because of ChatGPT: can we start to augment some of our capabilities in the field using this technology too? Those are the things we’re exploring in our toolkit.

Jennifer Evans:
Fascinating. So each of you work in or with organizations exploring AI and its adoption. What are some of the roadblocks or challenges that you are currently facing and could expect to face as organizations are adopting AI technology?

Dan Williamson: 
I’m happy to start. I mean, it’s a moment for AI, so there’s a lot of hype. There’s a lot I could talk about here, but I think just generally: the trough of disillusionment, right? You’re hearing about all these tools and interesting things that you might want to go spend some time learning about. When you get the response back and it’s wrong, that’s a problem, right? I had a use case where I asked ChatGPT for something, and it came back with a quote in it, and I was like, that is just the perfect quote. Before I took the quote and put it into something else, I ran it through Google, and the author was misattributed. So you’ve got to be careful. I heard the Microsoft CTO describe the output of ChatGPT responses as kind of like C-minus work from a junior in high school, which, by the way, at scale is a game changer. But at the same time, if I’m going into a client meeting and I’m presenting C-minus work from a junior in high school, I’m getting fired. So that is a tension point right now, and I think we’re going to have to navigate it. I think we’re probably going to talk about it on a different question, but there are also some real legal concerns around who owns the content. There are definitely jurisdictional concerns at a governmental level, right? We’ve proposed to ban TikTok. Why? Because it’s deployed to 140 million Americans and it’s potentially training an enemy state’s AI. So there are some real, at-scale issues. Organizationally, I think we have to balance the tension point that our people need a productivity boost. They’re going to use these tools, so we need to find a path to give them a productive way to do that with those concerns in mind. That’s where we’re trying to strike the right balance.

Stacy Thorwart: 
Yeah. From an architecture and design firm perspective, I’m seeing two different paths emerge. Among the smaller, more nimble firms, I think we’re starting to see a higher and faster adoption rate of AI technology, particularly inside smaller, more residential-focused firms that can be a little more nimble and have more of a permission-based culture to experiment and try new things. Interestingly, I’m seeing less of that with the larger, more global international design firms, which in a lot of ways isn’t surprising, because at the larger firms there are higher stakes involved and a lot of sensitive information. There are a lot of concerns, not only from a legal perspective but also from a copyright infringement perspective. Because of that, I think the larger organizations have been really quick, from a legal perspective, to put mandates in place telling their employees, we don’t want you using these tools; we have a dedicated team of a handful of individuals that has been tasked with figuring out AI for the whole organization. And so I think you’re seeing the speed of adoption suffer in some of those organizations because of these blanket mandates, which in some cases exist for good reason. But I think that is definitely a roadblock as we think about how these tools will get adopted at scale, and a potentially missed opportunity for individuals to experiment and figure out interesting use cases that could drive not only efficiencies but also the creative process further. So I think it’s going to be interesting to see how that plays out: how we get to a point where there’s a little more permission for innovation and exploration, while also balancing the real concerns around privacy and copyright.

Patrick McGrath: 
Yeah, those are great answers. Where I’ve seen a lot of struggles happen is generally around permissions, or I would say confidence and uncertainty about what’s going on in the space, coming from a lack of education and understanding. What I see in a lot of places is a gap between the people that know what’s going on, or want to tinker in the space, and leaders giving permission. So I think there’s a need to educate leadership at your company on what the pathway to doing these things is. I had to sit down with our CEO and have a conversation. He knew that he didn’t know how to get from A to B, but that’s why I’m in my role: to figure out how to get from A to B, because he knows it’s important that we figure that out. So there’s a gap in understanding in some leadership realms about how to get there, and I think it’s on your companies to find someone that can educate leadership and provide strategic direction on what is good for your company to do. That leadership education piece is a big barrier to entry, one that we’ve struggled with but that I think we’re making some good progress on. The other aspect that is tough for some people to understand is the cost of this, right? A pro license at $25 a month per user, or whatever it is, looks like a small number. But scale that out to the size of your company. I think when I did that at our company, it was like a three-quarter-million-dollar-a-year investment. That’s not a small decision to make. And in doing that, we looked at some other tools, and we don’t know what the consumption costs will be. When you’re paying for AI, you’re paying for data to be processed somewhere in a data center, and you’re paying the cost of that.
It’s not easily known what those costs are going to be for your business. So that’s another barrier: from our technology team’s side, trying to understand what business value we’re driving and whether we can justify the cost of running those algorithms, more or less, or of giving people permission to use these tools. And just to elaborate on the cost of a pro or enterprise license: if we’re going to give out a license for, let’s say, ChatGPT Enterprise, which is not cheap per user, we want to make sure people are trained to use it well, right? We don’t want to just give someone a Ferrari and say, okay, I hope you can drive that Ferrari well. We want to teach you how to drive it, how it’s meant to be driven. So those are the things we’re seeing as challenges or barriers that we’re trying to figure out.

Jennifer Evans: 
Sure. So Dan, you have AI in your title, so you’re working for an organization that obviously is embracing this and has you employed to explore it on their behalf. And same thing with you, Patrick; you’re exploring on behalf of your organization. Do you see this as a future job position? Should organizations potentially start looking at either internally staffing or externally contracting with someone that can steer them in the right direction? Because there is a lot to navigate: to your point, cost, right? What is that cost, and what’s the value of having a human do it versus AI doing it? And then even just the legality of all of the issues that we’ve been discussing. It seems like a huge thing for a lot of small organizations to even begin to tackle. So what do you see around that?

Patrick McGrath: 
I’m super biased on this question because this is my role. But I think yes: in some capacity, whether it’s an individual, a team, or a council, you should, in my opinion, have people that are focused on trying to understand the landscape and provide direction back to your company. My role came out of going through an enterprise data strategy, where we put together what an AI readiness roadmap would look like for us. That’s an 18-to-36-month roadmap that involves some personnel and organizational changes, and it essentially created the pathway for a role to go investigate this. That’s our investment, knowing that we’re going to figure this out, and knowing that we don’t have to figure it out completely. I think that’s important: make the decision that you’re going to give people permission to go figure out what it means for their business and their company, because it’s going to be different for everyone, whatever your policies are, whatever you choose to invest, whatever problems you go after. But it’s coming. I haven’t heard anyone in the last 18 months say that it’s going away in any capacity. I only hear people say that if your company’s not using AI right now, or if you’re not investing in AI in some capacity, in ten years your business is going to be in jeopardy. So whether that’s a council or a role, there are ways to go after it, but I think you should address it and give it some attention. Again, I am super biased, because it’s my role.

Dan Williamson: 
Yeah, I agree. I think the capability is, on some level, table stakes for organizations. Maybe a different twist on the question is: where does that capability sit organizationally? Does it sit in the technology department? Is it a support function for your organization, meant to take your support functions forward from a productivity perspective? Or is it a go-to-market strategy, where you’re really focused on developing something different that you’re bringing to market? I’ve been with our organization for over two decades, for better or for worse, and I’ve had the technology seat, where I was running and overseeing all the investment. For us, we are a property services company; we’re a people-centric company. So our technology is really intended to give those deal teams the maximum productivity boost as they go to market. From my perspective, AI sits in the technology part of our functional platform, with the intention of giving a productivity boost to the front line, versus us being an AI company or a technology company; that’s just not who we are. But I do think, as you look at organizations more broadly, there are probably some common threads in terms of what is very valuable to AI, and our data is incredibly valuable to AI. Going back to where you draw the lines, it can be a little confusing who owns what data in some of these tools. Until that’s really hashed out, I think you’re probably going to see continued reluctance when it comes to legal teams and executive teams, because they realize, even if you’re a people-centric company, that the data is very, very valuable, and training the AI is very, very valuable. You need experts to train these algorithms and predictive models, and we have the experts. So who owns the prediction model? 
I think those are some interesting strategic tensions in the marketplace. But yes, I agree you should have a job for it.

Jennifer Evans:
Many are resistant to change, and this feels very scary. I’m still trying to persuade a few of my organizations to use Teams chat; even that feels like a scary thing for us. I won’t name names, but, anyways: how can we start to have these conversations? How have you seen them be successful when you’re not the person of influence, but you want to start the conversation within your management team, with your superiors, around adoption, introduction, permission to play with it, saying, I have used AI in ways that have felt successful? Stacy, I’m looking at you to get us started.

Stacy Thorwart:
Yeah, I think it definitely depends on the culture of the organization that you work for, right? But at a minimum, sometimes it’s better for others to see it in action. So instead of trying to explain what it can do, have actual use cases where it’s been helpful and share how you’re using it. The company that I work for, Steelcase, has a lot of permissions around what we can and can’t use it for, but simply having peers see how you’re using the tools helps. Showing by example, I think, is a great way to start building that case. The more people that start to use it and interact with it, the louder that voice is going to be, and you’re going to be in a better position to have that conversation at a more senior level within your company. So start small; don’t be afraid. I’m also seeing a lot of companies roll out pilots. Pilots are another great way; they’re low risk. Before rolling it out to the whole organization, find out who those early adopters are. Hopefully within that pilot you’ll have a diverse group of individuals from inside the organization, so you can get diverse perspectives. Starting with a pilot is a low-risk way to build up to broader adoption across the organization.

Jennifer Evans:
Looks great. Any other thoughts to add?

Patrick McGrath:
Yeah. What you’re talking about, as I understand it, is: have a plan. Don’t just ask, hey, can I do the thing; have a bit of a plan for what you’re trying to explore before just saying, I want to tinker with it. Although tinkering is really valuable to do as well, have a little bit of a plan when you’re talking to leadership: hey, I want to go try to use this tool to do X, Y, and Z, this is what I think it’s going to accomplish, and these are the problems I’m trying to solve. We’ve had some people ask us, hey, I want to try this tool, can you turn it on for me? And I go, why? What are you doing with it? When they come to me with something a little more robust, like, I’m trying to solve this business problem and it’s going to cost about this much money, I can partner with them and help find a solution, versus just being asked to turn a thing on. And it’s not always about cost; it’s more about what you’re trying to solve. But also give permission for the tinkering. There are ways to safely give people the guidance to tinker, and that is also super helpful to do.

Jennifer Evans:
Dan, anything to add with that?

Dan Williamson:
I mean, I’m a visual learner, so the idea that you show me how it works is critically important. The only thing I would add to that is: use technology to your advantage, right? If you do a prerecorded demo, that allows you a little more scale in terms of how you’re showing your use case. So, some strategies like that. I also wouldn’t underestimate the scarcity element; we’re a competitive organization, and there’s a feeling of being invited to the club when you’re on the team that gets to test-pilot things. I would use that to your advantage as well, to create momentum and energy around the social currency associated with having these tools. I think that’s an important element that often gets overlooked when you’re doing the plan. Most organizations have a plan; they have a committee. But how do you really create the buzz? How do you make it viral? How do you get people to it? It has to be a good use case; otherwise people come in and say, I’m not going to use that.

Jennifer Evans:
Stacy, I’m actually going to pick your brain on this for the creatives in the audience. There is the thought of junior designers using these tools: are they really learning the skills? And then even design or creative work, creative output in general: how original is it if I’m really using AI to create it for me? So I would love for you to play off of that and share your thoughts.

Stacy Thorwart:
Yeah, those concerns are very valid, and I never like to undermine them, because I get it. As a creative individual myself, I understand those fears. When you see an AI generate an image that looks like something real, when it’s not, there’s a reaction from creatives to say, but that’s not real; I’ve spent decades of my career building up to the point of having real pieces like this in my portfolio, and now you can do it in minutes with a simple text prompt. I completely understand that concern. But a different way to approach it is to think about all of the other ways we already seek out inspiration today as creative individuals. If I polled a room of designers and architects, I’m sure every one of you has used precedent imagery in your portfolio, whether from Pinterest or Google search or other projects. It’s part of what we have done for decades. As a creative class, we find inspiration in countless ways, whether that’s going for a walk, using a tool like Pinterest, or hand sketching. So I think we need to switch the conversation: instead of thinking of it as a replacement, it is another tool in your toolbox that you can use to tap into inspiration. And I think it’s really important to have the right mindset when you’re engaging with it. If you’re expecting from these image generators a perfect rendering with every detail of specification in it, the same way you traditionally get by working in design tools like Revit and then outsourcing for photorealistic renderings, you can’t think of it from that perspective. You need to think of it instead as not real imagery, right? It’s trained on real images, but you might get an image back with a window on the ceiling or a door in a very funny place. And at that point, it’s very easy to say, this isn’t relevant, I’m not going to use this. 
So this is garbage. But instead, you could look at it and go, wow, I would never have expected that; I’ve never seen a window on a ceiling in that way, or whatever that object is. It can become this very interesting way to explore options that are not limited to how we traditionally think about space, or to the images we would traditionally see in a tool like Pinterest or Google Image Search. So I think there’s so much potential there. It’s all around mindset, and understanding that it is in no way a replacement for the formal training that’s required to become a professional designer or architect. But it can be this wonderful tool to help augment the creative process.

Jennifer Evans:
That’s great. I’d love to open it up to some questions now, so we can hear what you all really want to ask the panel.

Audience Member:
Hey guys. One of the things that, if you’re doing any research or getting started with AI, you read a lot about is the lack of humanization in the process, right? And we talked about how quickly people are gravitating towards using AI; in like two years, people are using it like Google, and it’s really fast. One of the things I’m noticing is I’ll get an email from someone that I’ve worked with for ten years, and I’ll look at the email and go, wait a minute, I just got AI’d. This is not normal. What is going on here? And it’s a great email, but the human aspect of that individual has been removed from it, because I know you, and you don’t talk like that, right? So my question is, if two years is all it takes to get people to use it, how quickly do you think that behavior is going to change? What’s the rate at which people will use it without it feeling like it’s not them, you know what I’m saying? How quickly do you feel like the behavior is going to change with the rapid pace of people adopting the technology?

Stacy Thorwart:
Yeah, I can speak to that briefly. I think that’s a great example, because a lot of us have experienced that, right? It’s very jarring at first when it happens. I think a lot of that is just where the technology is today. A lot of these tools are still very homogenous, right? They’re not personalized to you. That’s going to change extremely rapidly; the tech already exists. You’re seeing it in tools like ChatGPT already with its memory feature: the more you talk to it, the more it gets to know you, your brand, your professional background. You can update that memory if there are facts you want it to remember, or not remember if it was just a one-off conversation. So as the tech gets better and better, which it will, what’s going to be interesting is that that email will, in theory, start to sound more like the person, because the technology is actually going to know more about you. It’s going to feel more authentic, because it’s going to be less homogenous and more unique to the person interacting with it.

Dan Williamson:
I mean, I feel prompted to take a little bit of a contrarian view. For sure it’s coming; the question is how long it will take. I’m always surprised at where regulation starts and ends in some of these journeys. Uber was the best thing ever for the user, right? It worked magically: the car just showed up, it told you when it was going to show up, it worked with your wallet. But the taxi industry was a pretty significant stakeholder group being impacted, and there’s a tax base associated with that. So you started to see governments step in and slow that adoption rate down. And when you talk about somebody’s voice or somebody’s identity, or even the creative engine we’re talking about, what are the ethics of the mood board that was inspired by creators who haven’t gotten attribution or credit for the creation process? That is something we’re going to have to hash out as a community, and it’s going to take some time. We talked a little bit about it last night: some version of this feels like a Napster-type moment. The music industry went through it, and I think the creative community is going to go through it for sure. So yeah, I’d say it might slow down a little bit before it really picks back up again, for those structural issues that still need to be worked out.

Jennifer Evans:
I think we’re all super excited to see where AI takes our industry. I also recognize that in our industry there’s a lot of focus on sustainability happening at the same time, and we’re starting to hear more about the power consumption related to AI. How do you see companies reconciling both needs: the need to keep up with AI, but also to stay on track with their sustainability goals?

Patrick McGrath:
I can talk a little bit to that, because my company builds some of those data centers, and it’s a challenge. They consume a ton of energy, right? The companies building those data centers do have that in mind, and ultimately it’s going to be regulation that helps force them to do more on sustainability there. But if you look at some of the providers, they’re already advertising net zero data centers, net zero cloud, net zero whatever. Now, how are they making net zero happen in reality? That is a question, right? Is it how they’re procuring their energy? Are they buying a bunch of trees somewhere? How are they actually achieving net zero? But it’s definitely in mind. A lot of these data center builders are trying to find the best access to energy, and there will also be efficiency gains as people design more efficient chips to go inside the data centers and make them run. So it’s a challenge, and it’s more on the energy consumption side than the building itself. Can we make algorithms more efficient? Can we decentralize where the compute has to happen? There are a lot of technical things that are going to have to happen. I don’t want to misstate how much data centers consume, and I wish I had the actual numbers, but it’s an insane amount of energy compared to the rest of industries. So it’s something we have to figure out. And those clients, and we, are invested in trying to figure that out, because it’s in their best interest as well.

Stacy Thorwart:
Yeah, I think about two different things. Number one, we need to make sure there’s public awareness around it, because if people aren’t aware of the actual energy consumption happening behind the scenes, then there isn’t going to be a loud enough voice to ensure responsible adoption and that the technology moves forward in a more sustainable way. I would also say, if we take a step back, a lot of this is relatively new, and so hopefully we’ll start to see a progression, like other industries. Think about the furniture industry: where our practices were even a decade ago from a sustainability perspective, versus where we are today as an organization, with education and a better understanding of the impact. So I’m cautiously optimistic that as the technology improves, it will improve in the right direction. And in a lot of cases the more sustainable approach can also bring savings from a profit perspective. So there’s also a profit motivation to figure out ways to do this more sustainably, and that is a great thing as well, because then it’s a win-win, and there isn’t the concern around a trade-off.

Patrick McGrath: 
When people talk to AI right now, those tools are based on large language models that take a ton of energy and money to train, and only a very few companies have the capacity to do that. So: Google, Microsoft, OpenAI (which is backed by Microsoft), Amazon with Anthropic, right? And there’s Meta, training in open source. There are not many companies that can afford to train a high-performing foundation or frontier model, as they’re generally called. So we’re going to rely on those players to most likely continue making advancements, and there will only be a handful of them until we get the cost of training down, if people can do that, right. Then what people are doing beyond that is building on top of those foundational models. A lot of startups and other small companies leveraging AI in their products are going back to a source model in some capacity, one of those four or five providers, and building customizations on top of that. So who’s leading in that industry? It feels like every week it’s a different one. But it’s the larger companies that can afford to train a model.

Stacy Thorwart: 
And to your point, it really does feel like an arms race right now, and there’s a lot of strategy involved. If you start to follow these things: Google will come out with their big announcement right when Apple is about to release their big AI announcement, and then Meta will show up with theirs. So it really feels like there’s an arms race over who is going to keep that front-runner position. And time will tell; we don’t know who’s ultimately going to win out in the AI race. I think it’s unfolding right now as we speak.

Dan Williamson: 
I gave my pick last night for the next, you know, 18 months. I think Meta, with the integration of the physical hardware on your face, is making a big jump right now. I think Google is underestimated; they’ve been at it for a long time, and OpenAI kind of stole the thunder. But the cost to compute, going back to some of the things we talked about: I think the cost to compute for a ChatGPT query is something like a thousand times more expensive than a Google search. So Google definitely has a lot of capacity and wherewithal to do it. I’ve sort of got it in my head that, you know, a Warby Parker and Google thing will come together at some point. So I was pitching that last night.

Patrick McGrath
I do think, on that question, my personal opinion is that it’s not necessarily important which one wins, because the existing capabilities from the top three or top five right now are good enough, for the most part, for our industry to deploy. If you start using them right now, whether it’s Gemini versus ChatGPT out of the box, there may be some user experience differences, but in terms of general capabilities, you’re not going to say, when Anthropic announces a new one, oh my God, I’ve got to switch to this new one because they’re winning. Those differences are going to be minor. The people who really want those gains are the people building applications that you then consume; for them it may enable a new feature set or new reliability. But as a general consumer, to me those battles are somewhat a matter of indifference, unless you’re investing in it, like maybe this guy over here.

Audience Member:
Hello. So Stacy spoke about feeding images, or sketches, to prompt AI to generate images. How careful are you when you’re feeding in these images? Are you scared that your design might show up for somebody else, that when they’re putting in a text prompt they could get an image off of your sketch, or something like that? Or even just feeding in ideas: how careful are you if it’s confidential? If you’re speaking to a client that has confidential material, I’m sure you have to be careful with that stuff. But I want to hear from you guys.

Stacy Thorwart: 
Yeah, it’s a great question, and I’m glad you brought it up, because it’s a concern a lot of us are facing right now, and it’s a real one. I can give a real-life example of an AI image I generated that literally went viral. I was scrolling on Instagram, and I saw a company I know nothing about using the image that I generated, plus some of my marketing material, all over their website and in different marketing material they put out, with no credit or reference back to where it came from. And, you know, the initial reaction is, well, I created that. But did I really create it? Right? And that is also how U.S. copyright law stands today: you cannot copyright an AI-generated image. You can obviously copyright an image in your portfolio of something real that you designed, but if you generate something with AI, you cannot copyright that image. So I don’t own that image under U.S. copyright law; the AI image generator produced it, not the individual. That might change over time, but just be aware of it, because a lot of people aren’t.

And then the tools are tricky, too. If you’ve used a tool like Midjourney or a lot of the other image generators out there, it can seem like you’re engaging in your own private image generation realm. In reality, all of those images are being fed into publicly accessible websites. Any image you generate with Midjourney under their basic plan, as an example, goes right up into their web gallery, and with a simple search you can see any image that was created, the exact prompt used to create it, and the creator who made it. A lot of people don’t realize that, and usually that’s the default, kind of like with our cell phones, unless you go into the settings and really drill down to turn off some of the permissions.

A lot of the time the default is that when you generate these things, they go out into the public. So number one, just be aware of that. And number two, if it is a sensitive case where you don’t want those images publicly available, find a tool that has more protection around it. For Midjourney, as an example, on their pro plan, which is something like five times the cost of their basic plan, you can have those privacy settings turned on. So especially in a commercial setting where, say, you’re generating an image that needs to be proprietary for your client, really understand the terms and conditions of the tool your organization is allowing you to use for that project, and make sure you’re adjusting the permission sets accordingly. That’s where we start to see a lot of concerns around privacy. The larger organizations are standing up their own instances of tools like ChatGPT, where it looks and feels just like regular ChatGPT, but it is private to that organization, so the information you put into it is not going to leak out. Larger organizations are starting to invest in that type of technology. But for smaller organizations that are just using the generic tools, that’s where coaching comes in. To your earlier example: if you’re going to hand over the keys to the Lamborghini, make sure the team knows what is okay to share and what’s not, and that there’s some degree of training as these things are rolled out.

Patrick McGrath:
Yeah, I think the least sexy part of my job is reading terms of service and data privacy documents. Those are incredibly important, and that’s why I said earlier to put counsel, or people in a dedicated role, on figuring those things out, because those are what let you say, now we feel confident endorsing you to do these things with this tool, because we’ve read it. If you had to have everyone in your company read the terms of service, they wouldn’t do it, and you shouldn’t make them. You should have some folks figuring out which capabilities and which sets of data protections we’re confident in and want to go use. You know, we have some clients that won’t even let us put data in a given place unless it’s somewhere they’ve approved. So even if we endorse, say, ChatGPT Enterprise, which gives us the security we want, certain clients, because they know it’s most likely on an Azure server, won’t let us use that product at all, even if it’s quote-unquote secure to our instance. So there’s a lot of nuance in that space. But there are ways, as you mentioned, Stacy, to be confidently secure with your data, and it’s mainly the larger organizations that have the more stringent things they need to pay attention to. For a fun Saturday read, open up a terms of service and see what they’re saying in there. But also understand: with any free tool, if it’s free, you’re the product. So just know that as well.

Jennifer Evans: 
It seems like a best practice, whether an organization wants to adopt AI or not, that they should have something documented around its use, right? I mean, would you guys agree? Because even if a small company says we don’t use it, no one can use it, you really don’t have control over that one individual who’s going to start feeding ChatGPT or other tools information about the organization. So is that something you’re also talking to clients and others about? Like, hey, even if you’re scared of it and not going to use it, it’s best practice to have something documented that you’re pushing out to your employees around its use.

Dan Williamson: 
Yeah, I mean, we definitely owe it to our employees and our clients to let them know where the guardrails are. That is just an obligation of any leadership team. So we’ve tried to do that, and we’ve also tried to stand up our own enterprise version where you can actually engage with it and try out different use cases on your own, because we don’t necessarily think the central brain is going to figure out all the best use cases. We’ve got to get the field involved and engaged. But to your point, getting into that group requires some upfront acknowledgment of those guardrails that we put out. At the same time, it’s a tricky time, because people are definitely going to use it in their own personal lives, and there are probably going to be situations where they get an unanticipated negative response, which moves the conversation into the challenges of this ecosystem right now, because it is still very much on the frontier. But I’ll circle back: we wouldn’t be having these conversations if we didn’t think there was real promise in terms of where it’s headed and how it’s going to impact every industry and all of us individually.

Patrick McGrath: 
I think one of the best things you can do to prevent inappropriate use of the tools, maybe not inappropriate so much as security issues, is to educate your staff on what you’re endorsing them to do. So we have an entire AI curriculum that we’re going to roll out to everyone at our company so they can feel more confident. Yes, we do have a policy, but we can’t monitor everyone’s use on their phones or whatever; we’re not going to be able to do that. So the more we can educate them, the better our guardrails work, and hopefully people use it the intended way. You know, I had an example of somebody asking, what’s the cost per square foot for brick in, let’s say, San Diego today? You should not use it for that prompt; that’s not an answer you should expect to get out of the tool. So educating people on what is and isn’t a good use case also helps them know what they’re going to get out of it. A policy paired with educating staff, I think, is really, really important for encouraging good uses of the technology and confronting bad ones.

Jennifer Evans: 
I have a question, kind of back to the copyright, and it’s related to the portfolio question too. Right now, that would mean someone’s portfolio isn’t technically theirs if it’s created with AI. So I’m curious about each of your personal opinions: do you think that law should shift so that you can copyright it?

Dan Williamson: 
I’ll jump in a little bit. I would dig a little bit deeper: the inspiration for the creative engine that delivered the output came from somewhere, so where’s the attribution to that somewhere? I think that needs to be resolved. To me, that is an egregious position that OpenAI has taken in the marketplace. And I don’t know, I’m not perfect on this, but I think there was also a business strategy OpenAI had of going out to the developer community and basically saying, whatever the use case is, if this issue comes up, we’ll back you legally and fight it on your behalf, which gets that momentum going even further, right? You can get further and further away from the original source of content and attribution. So to me, that’s a big issue, and I think it’s going to come to a head sooner rather than later. OpenAI is strategically going after the publishers and striking deals now, where there is a contractual obligation between the two; that seems to be their strategy. But there’s a whole subset of developers and creators and photographers, right, that are being left behind. Google did it back in the day, but their strategy was to promote you through the engine, right? It would send you to the source; it was just a better way of getting to the source. OpenAI is not doing that at all. They’re just creating this engine, they and others, off the back of what the creative community has done. So I think that’s a huge, fundamental issue that needs to be resolved.

Patrick McGrath: 
I think it’s also difficult to solve. The way these models run, we don’t have a lot of good technology yet, though it’s coming, to understand what happened in the black box to produce an output. Even if you had 100 images, you can’t trace back that those were your 100 images that helped produce the one image at the end, so that you could say, relatively, that’s one cent per image, and send that back to the creators. We don’t have a lot of good technology to understand which images were sourced to produce the outcome. So how you monetize it back to the original content creator is, on the technology side, something that’s got to get figured out, or the structural model of how we pay the content creator has to be different. For every image that you give OpenAI to train their image generation model, do you get a dollar, one dollar one time? Figuring out what the data set is worth matters, because the data is the gold, right? How are they going to monetize, or give money back to, the original creators of that information? And it’s not just images. On the coder side, they’ve scraped all the forums and the answers to coding questions, all the people that have written code; that’s IP, and how they solved problems as well. How do you compensate them? It is very tricky all around.

Dan Williamson: 
But there’s some really interesting stuff happening on that front. Breaking down the data connection point is something that’s being discussed at scale. Going back to the early days of the web, there’s a snippet of code that allows the search engines to crawl your site so that it’s relevant in the search results. People are now removing that snippet, because OpenAI has exploited it for scraping and aggregating content into the black box. So I think this is a major issue, and it’s got to be resolved. Otherwise you’re going to see the open internet become a closed internet, where that access point is shut down at the point of creation.
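The crawler opt-out Dan describes here usually lives in a site’s robots.txt file. As an illustrative sketch, not something shown at the panel: OpenAI documents its training crawler under the user agent GPTBot, so a site that wants to stay visible in search results while opting out of AI scraping might serve something like this:

```text
# robots.txt, served at the site root
# Let a traditional search crawler index the whole site
User-agent: Googlebot
Allow: /

# Opt out of OpenAI's training crawler (documented user agent: GPTBot)
User-agent: GPTBot
Disallow: /
```

Compliance with robots.txt is voluntary on the crawler’s part, so removing or tightening it only deters well-behaved bots.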

Patrick McGrath: 
I mean, yeah, a lot of people are putting up paywalls now to stop that.

Stacy Thorwart: 
I think it’s so important to have conversations like this, to build that awareness, because in an ideal world there’s both, right? We want to be able to use these tools as creatives, but we also want to live in a world where you can still be a creative individual and be compensated for what you do as a creative. So it’s finding that balance, and it doesn’t have to be either-or. How can we get to a place where there’s more responsible use by these larger organizations? A lot of that starts with awareness. It starts with more creatives understanding how these models were trained, and then pushing for more legislation to make sure that future versions of these tools help protect the creatives who originally created the content. And we have a long way to go.

Jennifer Evans: 
In that regard...

Stacy Thorwart:
We’re not there yet.

Jennifer Evans: 
All right, final question, and it’s kind of a two-part question. We talked a bit about the tools, what they are, and some of the things we would feed into the tools to create that creative rendering or output. But in general, what should we be thinking about turning over to AI, both personally and professionally? What kinds of tasks should we start looking at and thinking, maybe I could use AI instead? So that’s the first part. And then second: in your respective industries, where do you see this going? What’s the future, you know, ten years down the road? Want to start down there, Patrick?

Dan Williamson: 
Okay. So one way I use it, and this is terrible, but I had to apply to a, like, a members club, and I used ChatGPT to write the whole response, and then I edited it. That was a toil task, right? It was going to require time, and I was perfectly capable of doing it, but it was a nice head start. So I use it personally for that. I talked about the Meta glasses; I think it’s interesting to prompt and have the cameras, to be able to be in a different country and read a menu back in your own language. That’s kind of interesting to me, and there’s probably more to come there with the ability to see the outside world and incorporate that AI assistant. Within our company, I go back to the fact that we are asked by clients to look at complicated problems that cover various different markets. They include, you know, hiring people; they include building capital stacks. There’s just a lot of structured and unstructured data that needs to become structured data, and once you have that, prompting, your ability to interact with it, just makes you way more efficient. If I go into a presentation with the company next door and I can literally ask everything about this market that I need going into that presentation, and be smarter in that presentation, that’s a great use case. So I think over time, if we do a good job as an organization, we will draw the best people to our platform, and they’re going to expect our platform to give them that productivity boost within the ecosystem we play in. That’s where I see it going for us.

Jennifer Evans: 
All right. Stacy?

Stacy Thorwart: 
Yeah. So I think about where you start with it, right? Think about the tasks that you don’t enjoy; that’s a great lens to look at it through. You know, what are the parts of your job where you go, oh, I don’t want to do this? Then get in the habit of asking yourself, is there an AI tool that could do this for me? Also, are there things that you enjoy, or want to know more about, that you just don’t feel super confident in? Some of these tools are a great avenue to democratize certain fields. A field like interior design is not accessible to a lot of individuals because of the skills and the technology needed to have a visual language. And that’s true not just for interior design: if you’ve ever wanted to create music, it’s never been easier to create music thanks to AI. So think about those things you’ve maybe been curious about but don’t feel you have a high degree of competency in, and ask whether you can leverage AI to fill in some of those gaps in your skill set. And then, as far as where I see it going... actually, one more thing I would add on ways to use it, because I would love to see more people use it this way. Step one, I think, is to use it as a creative collaborator, a thought partner, someone you can bounce ideas off of. But there’s also so much power in using it to give you an alternative point of view. If you have an idea in mind, or you’re getting ready to pitch a presentation to a client, or whatever it is you might be doing or thinking through, ask it to provide an alternative point of view or a counterargument, because it has such a powerful way of opening and broadening your thinking.

As a society right now, I feel like there’s never been a better time to take that mindset when engaging with these tools, and I think you’ll be very surprised by the outcome if, instead of using it to reinforce the ideas you’re already bringing to it, you use it to broaden your thinking. So those are some ways I think are great places to start. And then, where I think it’s heading for our industry, and where I get most excited, is the ability to push design forward by freeing up some of those tedious tasks, like laying out a floor plan, as an example. If some of those things can be automated, think of the time that frees up for us to do other things and really push our creativity forward. But with that also comes the responsibility of protecting that time. There’s a great analogy about when the washing machine was first invented: everybody thought they were going to get all of these hours back, and what did we do instead? We just started washing our clothes, like, ten times more than we did before the washing machine. It sounds like a silly analogy, but I think we’re at a similar moment in time with AI. There are going to be efficiency gains, and there are things that are going to be automated. So how do we get ahead of this and think about protecting that time, instead of just taking on more and more and more? How do we use that time in a way that can really help us produce better buildings, better spaces, better environments? So that’s a little bit about where I think it’s going.

Jennifer Evans: 
Well, that’s great for us. Dan?

Dan Williamson:  
Yeah. I would agree with starting at the task level and looking at the tasks that you do. I think a lot of people get hung up on whether AI is going to replace a job, but a job is a lot of things, you know. So think about it as, how can I replace the tasks that I don’t enjoy doing, so I can spend my time doing more high-value decision making? If you can automate counting, or automate analytics... a lot of the things we need to do with our buildings right now are unbelievably more complex than they were ten years ago: the amount of AV in buildings, the sustainability constraints, the material choices, all the other decisions that go into a building, way more complex than even two decades ago. So if you can automate a lot of the capture of those things, so that you’re spending more time making the creative decisions with that information at hand, that’s a much better use of time. So look at the tasks that you don’t like doing, and think of it as a task, not a job, because I think we’re always going to have the jobs we have in our industry; we’re just not going to do the things we don’t like doing. That’s the perspective I tend to give people: don’t think, is my job going to go away? Aspects of your job are just going to get better. As a starting point for a lot of people, and not to sound like I’m assigning homework, but honestly, read Co-Intelligence by Ethan Mollick. Get the audiobook, listen to it at one-and-a-half speed, and work through it. The perspective he writes about, as a way to engage with generative AI, gives you a ton of different ideas about engaging: is it my thought partner? Is it a way I can prepare for an interview better? What questions would I expect with this RFP? Those are not security-sensitive questions to ask, and there it can be a thought partner.

What other things haven’t we thought about? Our team in the field will use it, say, to generate a safety checklist for installing a certain type of work. We’re not going to use that as our only checklist, but it gave them four or five other ideas they hadn’t thought about. So I highly recommend reading that book. I don’t get any kickback from it, I wish I did, but it’s called Co-Intelligence, by Ethan Mollick, and it’s a really good read for perspective on how to work with the great array of AI tools out there, and maybe some different ways of thinking about using them. I also think we’re going to see a lot more advancement. Ryan does a lot of construction; it’s a big part of our business. I think we’re going to see a lot more advancement in safety, computer vision, and robotics to make job sites safer and more efficient. There are just a lot of things we can do in that space, and the costs are coming down. And I think it’s going to be really important if we can feed that data back into the design phase, so we can start to leverage it more. I think the digitization of our industry will teach us a lot in the next decade about what that means for the types of decisions we’re making. I don’t have a fully clear picture of it, but the way we build buildings needs to somewhat evolve, because it’s still very slow, dangerous, and risky. If we can get more intelligent about those types of things, that frees us up to move faster, because ultimately, the faster you can produce a building, the less risky it potentially is, if you can confidently do it faster. That’s where I think we need to move, because of how fast markets shift and supply chains change; being able to move faster is where I think we’re headed.

Jennifer Evans:
Awesome. Okay, that wraps us up. Thank you so much, you guys; that was a great conversation. Today’s discussion explored the complexities of the AI landscape. We had a great conversation digging into what AI is, how we as consumers can use it, and some of the barriers keeping us from using and adopting AI in our personal and business lives.