Breaking Ground

Construction • Industry • Power • Western PA

AI: Opportunity and Risk

AI creates enormous opportunities... and risks.

By Timothy Barkebile and Harvey Ahn

3/11/2026

The advent of ChatGPT brought mainstream attention to technology that had been evolving quietly for decades. While artificial intelligence was already embedded in estimating platforms, logistics tools, and scheduling analytics across the construction sector, ChatGPT dramatically expanded access to AI through its ease of use. For the first time, a novice user could generate genuinely useful content simply by entering a straightforward prompt.

Developed by OpenAI, ChatGPT is a conversational chatbot that generates responses in natural language. As professionals experimented with the tool, many discovered its value in handling routine, time-consuming tasks, such as drafting emails, summarizing lengthy documents, generating checklists, and organizing information. Competing tools quickly entered the market, and AI transformed from a specialized technical capability to a standard feature embedded in many business platforms.

The term “artificial intelligence” now encompasses a broad range of technologies. Machine learning systems analyze historical data to identify patterns and predict outcomes. Deep learning models interpret images, drawings, and sensor data. Generative AI tools create new content. Increasingly, integrated platforms combine multiple AI models to deliver coordinated functionality, and some systems can automate defined tasks with limited human intervention.

The potential benefits of AI in the construction sector are immense and exciting. Construction is a coordination business requiring constant communication, disciplined documentation, and complex decision-making across layered project delivery systems. When deployed with a clear understanding of its capabilities and limitations, AI can expand access to information and allow firms to redirect resources toward higher-value activities such as strategic analysis, creative problem-solving, risk management, and client engagement.

At the same time, expanded capability brings expanded risk. The same ease of use and broad accessibility that accelerated AI’s adoption also increase the potential for misunderstanding and misuse. Successful implementation requires acknowledging several practical realities:

  • Every AI model has defined strengths and limitations, and not every tool is suited to every use case. Selecting the appropriate model for the task is essential. AI should not be deployed without clearly defined use cases and oversight protocols. Automating workflows without understanding underlying assumptions can create operational blind spots and unintended consequences.
  • Outputs are only as reliable as the data on which they are based, making data quality and governance essential. Weak data controls can produce inaccurate outputs that drive flawed business decisions. In addition, uploading sensitive project information into unsecured or public-facing platforms may inadvertently compromise confidentiality protections, violate contractual obligations, or expose proprietary data.
  • Effective use of AI requires meaningful training. Users must understand both the functionality and the boundaries of AI tools. Without a baseline understanding, employees may misuse systems, input inappropriate data, or misinterpret outputs. At the same time, failing to upskill staff may leave firms competitively disadvantaged as peers adopt AI more strategically.
  • AI cannot replace professional judgment. While it can inform and enhance decision-making, the end user remains responsible for the accuracy and integrity of the final deliverable. Treating AI-generated analyses, summaries, or recommendations as authoritative without scrutiny and verification risks incorporating errors into bids, schedules, compliance documentation, and client communications. Overreliance may increase exposure to contractual claims, tort liability, professional liability, or reputational harm. Ultimately, the end user is responsible for actions taken in reliance upon AI output and cannot later point a finger of blame at an AI platform.

The risks associated with these realities can arise in unexpected ways across an organization. The following are a few practical and legal considerations relating to the use and adoption of AI tools.

Unsupervised adoption. Without approved tools or clear guidance, employees may turn to whatever is convenient, a phenomenon often called “shadow AI.” This can reduce organizational control over both the tool and the information entered into it. Many AI services reserve the right to use user input to improve their systems, including training AI models. Uploading drawings, pricing, contract language, or sensitive project discussions could inadvertently disclose confidential information, weaken trade secret protection, or violate contractual confidentiality obligations.

Intellectual property concerns. AI-generated works may not qualify for copyright protection, potentially limiting exclusivity even if the content is valuable. Terms of use may also restrict ownership, as a company may receive only a limited license to outputs, while vendors retain intellectual property or other rights. Certain contracts may shift risk to the customer, including indemnity provisions requiring the company to assume liability if disputes arise from AI-generated content.

Employment discrimination. Employment-related AI tools present additional considerations. Many recruiting platforms now use AI-driven scoring or ranking. Even if the company treats outputs as advisory, regulators may view them as part of the decision-making process. If the system contributes to discriminatory outcomes, liability may rest with the employer rather than the software vendor.

Note-taking applications. These tools can record, transcribe, and summarize conversations, raising potential privacy and consent issues. In jurisdictions that require participant consent, recording or transcription may violate wiretapping or privacy laws. Even where legal, companies must consider data storage, access, and whether sensitive discussions are retained in ways that increase security or litigation exposure.

Data location and handling. Some AI tools process information on servers outside the United States. Certain projects, particularly government contracts, may impose restrictions on data handling. AI may also process personal information, employee data, or protected health information, triggering privacy and data-protection obligations even when used internally.

Disclosure of AI use. Companies may need to inform clients or other stakeholders if AI materially influences service delivery, professional judgment, security practices, or handling of client information. Contractual language addressing AI use is becoming more common, and public companies may have disclosure obligations related to AI adoption.

Legal liability. It is also important to remember that AI does not transfer responsibility. If a design error, cost miscalculation, or scheduling mistake stems from AI-generated output, the contractor or professional remains accountable. AI can support decision-making, but human review and verification are essential before outputs are relied upon. Physical applications of AI, such as robotics, equipment monitoring, autonomous machinery, and safety-monitoring cameras, introduce additional considerations. Failures in these systems can create safety risks, and data collection (including worker location or biometric information) raises questions of consent and contractual risk allocation. Many construction contracts have yet to address responsibility for incidents arising from AI-enabled systems.

Emerging standards?

Acknowledging these risks does not justify disregarding AI altogether. The rise of artificial intelligence in construction is redefining what’s considered competent and responsible practice. AI tools, ranging from generative design and predictive safety analytics to automated quality inspections, are enabling faster, safer, and more efficient project delivery. As these technologies become mainstream, industry professionals may be expected to adopt them to meet evolving standards of care. In short, AI is not just a tool but is establishing a new set of expectations. Firms that integrate AI responsibly stand to reduce risk and improve outcomes, while those that ignore it may face heightened liability as the standard of care advances.

Conclusion

Companies adopting AI often emphasize governance to maximize benefits while managing risk. This can include defining use cases, reviewing vendor terms, training employees, setting data entry rules, and establishing internal review processes. While not yet strictly required, some organizations are creating oversight structures to ensure adoption does not outpace control.

AI is widely recognized as a disruptive and transformative technology due to its versatility and broad applicability. In construction, it offers significant potential benefits but also carries risk. Without preparation and responsible use, AI may introduce liability rather than efficiency. Firms can turn risk into opportunity by understanding AI’s capabilities and limitations, implementing appropriate controls, and ensuring its use strengthens project outcomes.