In a 2026 episode of Emerging Voices Unite Live, host Nikki Estes interviewed Jonathon Chambless, the founder of LV8R Labs. The episode focused on what distinguishes AI built for enterprises from AI built for consumers.

Chambless described LV8R Labs as an enterprise software-as-a-service lab building AI tools for architecture, engineering, and construction. He listed construction management, real estate development, regulatory compliance, workflow automation, and what he called "digital twin infrastructure" as target areas.

He argued that the enterprise constraint is accountability: professional tools must fit the rules and controls that govern real work. In his description, enterprise deployments face hard requirements such as compliance obligations, approval workflows, building permits, regulations, and formal submissions.

Those constraints change what “success” means. In consumer use, a tool can be useful even when outputs vary, because the user can decide whether to accept or ignore a result. In the episode’s framing of regulated workflows, incorrect specifications can halt work.

Enterprise AI: accountability and control


  • Jonathon Chambless frames the consumer-enterprise divide as accountability, not scale.
  • Compliance-bound workflows require specifications, documentation, and reviewable decisions.
  • Agentic systems raise stakes by running in loops and acting through controlled endpoints.
  • Permissions and data access are treated as core design inputs for enterprise AI.
  • Multimodal use and clear AI strategy alignment are presented as risk controls.
  • Digital twins and autonomy shift responsibility toward defined oversight and failure ownership.

Accountability is a design requirement


Chambless connected accountability to product design. He described "guardrails" as a requirement for a professional use case, not a feature that can be added after adoption.

He framed new AI tools as risk-bearing systems that should be treated with the same discipline used for other operational dependencies. He compared them to junior employees who need training, boundaries, and oversight before they can be trusted with critical work.

"It could cause risk. Think about junior employees in any business, usually your first hires is potentially your highest risk as they have the least amount of training and the least amount of experience."

He also warned that teams can mishandle a tool by assuming the system is "the expert" instead of treating it as software operating under constraints.

A governance-oriented description of this problem appears in the NIST Artificial Intelligence Risk Management Framework, which organizes AI risk management into functions such as GOVERN, MAP, MEASURE, and MANAGE. In the framework, the GOVERN function includes "accountability structures" and documentation of roles and responsibilities for risk work across an organization.

The shared point is operational. Enterprise AI programs need named owners, documented responsibilities, and review paths for changes, because the system will affect work that must be explained later.


From chat interfaces to controlled endpoints


The episode treated agentic AI as a shift in how systems behave once deployed. Chambless said what is "new now" is "building a system in a loop," where a model or agent is tasked with a goal and continues taking steps until it either completes a task or fails.
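The "system in a loop" pattern can be sketched in a few lines. This is an illustrative reading of the idea, not code from the episode; the names (`run_agent`, `plan_step`, `MAX_STEPS`) are assumptions, and the step budget stands in for the kind of guardrail the episode argues must exist before deployment.

```python
MAX_STEPS = 10  # guardrail: bound how long the loop may run

def run_agent(goal, plan_step, is_done):
    """Repeatedly take steps toward `goal` until done or out of budget."""
    history = []
    for _ in range(MAX_STEPS):
        action = plan_step(goal, history)   # decide the next step
        history.append(action)
        if is_done(goal, history):          # task completed
            return {"status": "completed", "steps": history}
    return {"status": "failed", "steps": history}  # budget exhausted
```

The point of the sketch is the third branch: without the budget, the loop has no defined failure mode, which is exactly the property that makes looped agents riskier than single-turn chat.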

He tied that looped behavior to stricter technical boundaries:

"The endpoint is here is a controlled environment where we're giving it access to APIs and access to data, right? But it has to be done under these strict guidelines, these strict enforcements."

He also described a practical adoption barrier: the systems that enable controlled access often require developer-level operation. "Most aren't really desirous to deal with API keys," he said, and he argued that this tolerance gap shapes what can be deployed safely in organizations.
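One way to picture a "controlled environment" with "strict enforcements" is a gateway the agent must route every call through, rather than holding API keys itself. This is a minimal sketch under that assumption; the class and method names are illustrative, not from the episode or any real library.

```python
class ControlledEndpoint:
    """Gateway that enforces an allowlist and records every call."""

    def __init__(self, allowed_actions):
        self.allowed = set(allowed_actions)
        self.audit_log = []                  # every call is recorded

    def call(self, action, payload):
        if action not in self.allowed:
            self.audit_log.append(("denied", action))
            raise PermissionError(f"action not permitted: {action}")
        self.audit_log.append(("allowed", action))
        return self._dispatch(action, payload)

    def _dispatch(self, action, payload):
        # A real deployment would invoke the underlying API here.
        return {"action": action, "payload": payload}
```

The design choice worth noting is that denials are logged before the exception is raised, so failures are attributable after the fact, which is the property the episode associates with enterprise-grade guardrails.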

A similar discipline is described in the NIST Secure Software Development Framework, which recommends integrating secure development practices across the software development life cycle. The document emphasizes outcomes such as defined requirements, protected development environments, and verification activities that can be communicated across teams and suppliers.

Chambless used the term "specification" when explaining why consumer patterns do not translate directly into enterprise settings. He said that organizations can see real performance gains when specifications are clearly defined, and he linked those gains to automation return on investment and workflow improvement.

"But again it all comes down to specification and alignment. And this is something you don't get just in a conversation with a frontier model."

He described alignment as a multi-layer problem. After prompt engineering and context engineering, he said, there is a third layer involving "the people in the organizational goals," and he warned that a tool having a task does not guarantee that the task aligns with an organization's goals or mission.

Data permissions, multimodal use, and policy boundaries


Chambless emphasized that enterprise risk is shaped by data access. He told listeners to understand the "data permission structure" in model settings and to treat those settings as part of responsible use.

He also recommended avoiding dependence on a single model or provider as a complete source of truth. Instead, he encouraged "multimodal frameworks" that combine services and models so teams can compare outputs and reduce concentration risk.
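A simple reading of that recommendation is a cross-check: query more than one model, return the answer only on agreement, and escalate disagreement for human review. The sketch below assumes stand-in provider functions rather than real model APIs, and the function name `cross_check` is illustrative.

```python
def cross_check(question, providers):
    """Ask every provider; return the answer on agreement, else escalate."""
    answers = {name: fn(question) for name, fn in providers.items()}
    distinct = set(answers.values())
    if len(distinct) == 1:
        return {"status": "agreement", "answer": distinct.pop()}
    # Disagreement is surfaced, not silently resolved.
    return {"status": "needs_review", "answers": answers}
```

Even this toy version captures the concentration-risk point: a single provider can be confidently wrong, but two independent providers disagreeing is a detectable signal.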

He extended the strategy point beyond tools, arguing that every member of an organization needs an AI strategy of their own. He described alignment between the organizational strategy and each individual contributor's strategy as a condition for stable adoption.

When asked what is difficult about AI infrastructure, Chambless answered "permissions" and returned to risk frameworks. He described permissions as the practical constraint that determines which systems an agent can reach and what it can do once it reaches them.
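That framing of permissions, which systems an agent can reach and what it can do there, maps naturally onto a small permission matrix. The structure below is a hedged sketch of the idea; the agent and system names are hypothetical.

```python
# Per-agent grants: system -> set of permitted operations.
PERMISSIONS = {
    "scheduling_agent": {
        "calendar": {"read", "write"},
        "project_db": {"read"},   # read-only reach into project data
    },
}

def can(agent, system, operation):
    """True only if the agent is explicitly granted `operation` on `system`."""
    return operation in PERMISSIONS.get(agent, {}).get(system, set())
```

The default here is deny: an agent, system, or operation absent from the matrix gets nothing, which is the conservative posture the episode associates with enterprise deployment.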

He also described a planning method he called "holistic outcome engineering." In his description, the first questions are about goals: "What are the goals of the organization? What are the goals of the project? What are the goals of the team members?" He then proposed creating an "endpoint spec" for each organizational node that lists risks, security requirements, and privacy requirements.
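The per-node "endpoint spec" can be sketched as a plain record that pairs an organizational node with its risks and its security and privacy requirements. The field names below are an assumption about what such a spec might contain, based only on the items Chambless lists.

```python
from dataclasses import dataclass, field

@dataclass
class EndpointSpec:
    node: str                                        # team, system, or role
    risks: list = field(default_factory=list)
    security_requirements: list = field(default_factory=list)
    privacy_requirements: list = field(default_factory=list)

    def is_complete(self):
        """A spec with no identified risks or controls is not reviewable."""
        return bool(self.risks
                    and self.security_requirements
                    and self.privacy_requirements)
```

Treating the spec as data rather than prose means completeness can be checked mechanically before a node is granted any agent access.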

He explicitly differentiated casual use from organizational use. "If this is just consumers, if this is just a club or your friends, then the risk is lower," he said. "But if you're representing an organization," he added, "there needs to be a clear risk mitigation or risk management framework in place."

He also warned that skipping early security questions can transfer unresolved risk onto other teams. In his description, this is a predictable failure mode when powerful tools become widely accessible without matching controls.

Digital twins, autonomy, and responsibility


The episode also addressed digital twins and autonomy in the built environment. Chambless said that 3D models and digital twins used for construction could gain "more automation control" over the physical structure, with a "live 3D model" being "used as a source of truth."

He framed this as a safety and risk management problem rather than a model-capability problem. He referenced autonomous vehicles, said there have been accidents, and asserted that the Department of Transportation has created a framework for risk mitigation that is used as a reference for autonomous vehicles and autonomous systems more broadly.

One example of a federal, safety-oriented guidance document for autonomy is a 2017 report, NHTSA Automated Driving Systems 2.0: A Vision for Safety. The document presents voluntary guidance and safety design elements for entities engaged in testing and deployment, and it discusses topics such as post-crash behavior and data recording.

A central theme of the episode was that enterprise AI becomes harder as it becomes more connected to real actions. As tools are given permission to access systems and complete tasks, accountability for outcomes must be assigned, documented, and maintained across changes in models, workflows, and staff.

"That's a trust, a level of trust that AI doesn't just generate, it doesn't create. That's something that a human builds and a human builds over time with another human."

The discussion did not claim that guardrails eliminate failures. It argued that guardrails determine whether failures are contained, detected, and attributable. In that framing, enterprise AI is not defined by a chat interface. It is defined by controlled access, specification, and governance that allows a system to be used without creating unmanaged organizational risk.
