Open the Pod Bay Doors
Last week I kicked off this three-part series by talking about the AI hype that is sweeping through the facilities and asset management world, and why the quality of your data is the most fundamental issue being overlooked in all of the excitement. If you missed it, I encourage you to go back and read Part 1 before diving into this one.
Today I want to tackle the second critical issue: the role human beings play in the AI equation.
For those of you who have seen Stanley Kubrick’s 2001: A Space Odyssey, the title of this post will bring to mind one of the most iconic moments in cinema: astronaut Dave Bowman asks the ship’s AI, HAL 9000, to open the pod bay doors, and HAL refuses. It is a fictional scenario, of course, but it captures something that I think about a lot when I hear people talking about handing decision-making authority over to AI systems. At some point, if we are not careful, we stop being the ones in control.
I am not predicting a HAL 9000 scenario for facilities managers. But I do think there is a very real and very underappreciated risk in the way our industry is currently talking about AI — and that is the assumption that we should be turning our decision-making over to AI.
AI is a Co-Pilot (unintentional Microsoft endorsement I suppose), Not the Captain
Let me be direct about something. At this point in time, AI is an extraordinarily powerful tool for analysis and decision support. It is not, and I would argue should not be, a decision maker. Not in our industry, and not in any industry. Not yet, and I would suggest not for a very long time.
The decisions that facility and asset managers make every day are rarely straightforward. They involve competing priorities, limited budgets, political considerations, and a depth of on-the-ground knowledge that no software platform fully captures.
When an AI tool analyzes your facility condition data and recommends that you defer the roof replacement at Building X in favor of the HVAC system at Building Y, that recommendation needs to go through a human filter. A human who knows that the roof at Building X is above a gymnasium that doubles as a community emergency shelter, or that the tenant in Building Y is about to vacate, or that the capital funding for a specific project is tied to a grant with a hard deadline.
AI tools can surface patterns and insights faster than any human analyst ever could. That is genuinely valuable. But the interpretation of those insights, the judgment calls, the communication to stakeholders, and ultimately the accountability for the decision, all of that still belongs to people.
One of the most vexing issues, even for humans, to sort out in facility and asset management decisions is small “p” politics. Any institutional owner, whether a school district, higher education institution, government agency, or hospital, has layers and layers of political considerations and nuances that need to be navigated when tackling any major facility decision. If humans haven’t got it figured out yet, there is no way an AI will.
What concerns me is the temptation, especially for organizations that are understaffed or under-resourced, to treat AI output as a final answer rather than a starting point for analysis. This is the facilities management equivalent of telling HAL to fly the ship and then taking a nap. The risk of over-relying on AI recommendations without proper human review is real, and in an industry where decisions can involve millions of dollars and directly impact the safety and well-being of building occupants, the consequences can be significant.
Just as we saw with the greenwashing era, where organizations slapped a green label on things without doing the hard work underneath, I am already seeing organizations treat “AI-powered” as a goal unto itself rather than a tool. The ones who will actually benefit from AI are the ones who build it into their decision-making process thoughtfully, not the ones who outsource their judgment to it.
The “Haves” in the AI era will not just be the organizations with the best data. They will be the organizations with the best data and the right people engaged in reviewing, interpreting, enhancing and acting on what their AI tools are telling them. The “Have Nots”, whether because of poor data, poor process, or misplaced trust in the technology, risk making faster, more confident, and ultimately worse decisions than they did before.
Are you and your team set up to be genuine critical reviewers of the AI recommendations coming out of your systems — or are you at risk of just hitting accept? The answer to that question is what separates organizations that will genuinely benefit from AI from those that won’t.
AI in facilities and asset management is real, it is here, and it is only going to become more capable and more prevalent. I am genuinely excited about what that means for our industry and for the organizations we serve. But as I said last week, excitement without clear thinking leads to poor investments and disappointing outcomes.
The technology is here to support your judgment. Not to replace it. Make sure the humans in your organization are still the ones opening the pod bay doors.
If you are ready to apply the experience and expertise of your human team to AI-powered insights, but still aren’t sure what a “good” dataset looks like, come back next week, when I will wrap up the series by focusing on what a consistent and defensible dataset looks like and how to develop a solid data strategy for your portfolio.