Adopting AI sustainably while optimizing cost

Artificial intelligence (AI) is revolutionizing industries and modern workforces, and its influence is undeniable across domains, from predictive analytics to autonomous systems. As we marvel at AI’s potential to drive sustainability research and solve complex environmental challenges, we also need to confront a less visible reality: AI itself consumes significant resources. In this blog, we walk you through how employing the well-architected framework can pave the way for a more environmentally sustainable AI adoption that inherently translates to AI workload cost savings.

The unseen environmental footprint

Training large models and executing inference tasks require energy, and this adds up over time, contributing to environmental concerns:

  • A recent Google study found that a median Gemini Apps text prompt consumes 0.24 Wh of energy and about five drops of water, less energy than a modern TV uses in just nine seconds.
  • University of California researchers estimated that global AI demand could consume up to 6.6 billion cubic meters of water by 2027.
  • According to the International Energy Agency, a typical Google search consumes 0.3 Wh of electricity while a ChatGPT request consumes 2.9 Wh, nearly a 10x difference.

With the rise of small language models, domain-specific models, and intentional prompting techniques such as context engineering, there is no doubt the AI industry is trending toward a more sustainable future. That does not mean we, as AI users, should put our own sustainable AI efforts on pause and wait for the industry to get there.

The well-architected framework

Reducing AI’s environmental footprint while maximizing its sustainable potential requires a strategic approach. The well-architected framework (available from AWS, Azure, and GCP) provides a set of general design principles that guide organizations in architecting cloud solutions. The framework encompasses six pillars:

  • Operational excellence: focuses on team organization, workload design, execution, and management.
  • Security: focuses on protecting systems, data, workloads, and privacy, and on meeting regulatory compliance in the cloud.
  • Reliability: focuses on achieving resilient and consistent workload design, implementation, and maintenance.
  • Performance efficiency: focuses on cloud resource utilization and optimization.
  • Cost optimization: focuses on delivering business value with the lowest cost.
  • Sustainability: focuses on reducing cloud applications’ environmental impact.

Each pillar includes design principles that guide architects and engineers towards building resilient, secure, and cost-effective solutions, without sacrificing agility or innovation. In the following sections, we focus on four pillars and itemize actionable activities that can help you build a sustainable AI future: sustainability, operational excellence, performance efficiency, and reliability.

Sustainability

Establish an environmental, social, and governance (ESG) goal

Any meaningful organizational change requires alignment on a strategic vision. We recommend creating a cross-functional ESG vision and roadmap that aligns your AI initiatives with ESG goals, with measurable targets such as energy reduction and environmental impact, and a clear view of how they translate to cost savings.
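To make such targets concrete, it can help to encode them in a form that dashboards and reviews can consume. Below is a minimal, illustrative Python sketch; the initiative name, metric, and figures are hypothetical placeholders, not recommended values.

```python
from dataclasses import dataclass

@dataclass
class ESGTarget:
    """One measurable ESG target tied to an AI initiative (all values illustrative)."""
    initiative: str   # e.g. "customer-support-copilot" (hypothetical)
    metric: str       # e.g. "energy_kwh_per_1k_requests"
    baseline: float   # value measured before optimization
    target: float     # value the roadmap commits to
    deadline: str     # e.g. "2026-Q4"

    def reduction_pct(self) -> float:
        """Percent reduction the target represents versus the baseline."""
        return 100.0 * (self.baseline - self.target) / self.baseline


# Example: commit to cutting inference energy per 1,000 requests by roughly 30%.
goal = ESGTarget("customer-support-copilot", "energy_kwh_per_1k_requests", 1.8, 1.25, "2026-Q4")
print(f"{goal.initiative}: {goal.reduction_pct():.0f}% reduction by {goal.deadline}")
```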

Opt in to renewable energy

Major cloud providers have been investing in renewable energy for their data centers, and you can choose to host AI workloads in locations that satisfy both business requirements and sustainability goals. AWS’s region selection guidance under the sustainability pillar is a great example of how to assess the trade-off between AI speed-to-market and sustainability commitments.
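As a rough illustration of that trade-off, the sketch below scores candidate regions on latency versus grid carbon intensity. The region names are real AWS regions, but the latency and carbon figures are placeholders you would replace with your own measurements and the provider’s published data.

```python
# Illustrative region selection: weigh business latency needs against grid carbon
# intensity. The figures below are placeholders, not published provider data.
REGIONS = {
    #              (p50 latency to users in ms, grid carbon intensity in gCO2e/kWh)
    "us-east-1":   (40, 380),
    "us-west-2":   (85, 120),
    "eu-north-1":  (140, 30),
}

def score(latency_ms: float, carbon: float, latency_budget_ms: float = 150) -> float:
    """Lower is better; regions over the latency budget are disqualified."""
    if latency_ms > latency_budget_ms:
        return float("inf")
    # Simple weighted sum; tune the weights to your own SLA and ESG priorities.
    return 0.5 * (latency_ms / latency_budget_ms) + 0.5 * (carbon / 500)

best = min(REGIONS, key=lambda r: score(*REGIONS[r]))
print(f"Preferred region under this trade-off: {best}")
```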

Invest in renewable energy certificates

In recent years, energy credit trading systems have emerged to promote energy-efficient R&D; through Renewable Energy Certificates (RECs), they provide a financial incentive for firms to focus on low-power AI alternatives.

Scale ESG commitment with vendors

We encourage you to review vendors’ ESG reports and third-party ratings, such as CDP, to gain confidence that your vendor partnerships support your continued ESG commitment.

Operational excellence

Measurable workloads enable organizations to embed accountability and transparency in driving sustainable AI adoption. In this section, we show you a few examples of immediately actionable items to start gaining visibility into your carbon footprint.

Measure and establish metrics

Start by establishing baseline metrics to understand where your organization currently stands on its carbon, energy, and water footprint. The AWS Customer Carbon Footprint Tool, Microsoft Sustainability Calculator, and Google Cloud Carbon Footprint are ready-to-use tools that help measure and quantify your carbon usage. Measuring and monitoring these metrics internally and/or including them in ESG dashboards can also promote awareness and encourage sustainable AI usage.
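The carbon tools above are largely console-based, so a common first step is to pull usage and cost data programmatically as a companion baseline. The sketch below assumes AWS with boto3 credentials that have Cost Explorer access; the date range is arbitrary.

```python
# Minimal sketch: pull monthly cost by service from the AWS Cost Explorer API as a
# usage baseline to sit alongside the Customer Carbon Footprint Tool's console report.
import boto3

# Cost Explorer is a global service served from us-east-1.
ce = boto3.client("ce", region_name="us-east-1")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2025-01-01", "End": "2025-07-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

# Print each month's top services so spikes in AI-related usage are easy to spot.
for period in response["ResultsByTime"]:
    month = period["TimePeriod"]["Start"]
    groups = sorted(
        period["Groups"],
        key=lambda g: float(g["Metrics"]["UnblendedCost"]["Amount"]),
        reverse=True,
    )
    for g in groups[:3]:
        amount = float(g["Metrics"]["UnblendedCost"]["Amount"])
        print(f"{month}  {g['Keys'][0]:<40} ${amount:,.2f}")
```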

Practice responsible AI usage

Establishing internal guidelines on responsible and ethical AI usage, and tracking how AI models are utilized, helps your users understand the motivation for sustainability in AI. Responsible AI aims to promote transparency into how AI applications are developed and used as a force for good. Example frameworks and guardrails include the NIST AI Risk Management Framework and the responsible AI guidelines published by the major cloud providers.
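One lightweight guardrail is simply to log how, and how much, your AI models are used. The sketch below wraps a model call in a decorator that records the model name, payload size, and latency; the function and model identifier are hypothetical stand-ins for your own inference calls.

```python
# Illustrative usage-tracking guardrail; names are hypothetical placeholders.
import functools
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("ai-usage")

def track_usage(model_name: str):
    """Log which model handled each call, roughly how much text it processed, and how long it took."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(prompt: str, *args, **kwargs):
            start = time.time()
            result = fn(prompt, *args, **kwargs)
            log.info(
                "model=%s prompt_chars=%d response_chars=%d latency_s=%.2f",
                model_name, len(prompt), len(result), time.time() - start,
            )
            return result
        return wrapper
    return decorator

@track_usage("small-domain-model-v1")  # hypothetical model identifier
def summarize(prompt: str) -> str:
    return prompt[:200]  # stand-in for a real model call

summarize("Quarterly energy usage report for the data platform team ...")
```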

Performance efficiency

Most AI applications involve data management, compute and hardware management, and network or content delivery. In this section, we present several ways to optimize your workload right away. While these practices are not framed primarily around sustainability, optimizing application efficiency reduces energy consumption and cost, and with it your environmental impact.

Adopt usage-based data management

This is a non-trivial recommendation: we observe that many organizations deprioritize data management optimization because the cost savings alone rarely seem to justify the effort required. Ordered from low complexity and maintenance to high, we recommend starting with the ESG goal in mind, for example by tiering infrequently accessed data to colder storage, setting lifecycle policies to expire stale datasets, and consolidating or deleting redundant copies.
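As a minimal example of the lowest-complexity end of that spectrum, the sketch below applies an S3 lifecycle rule that tiers stale training artifacts to archival storage and later expires them. The bucket name, prefix, and retention windows are hypothetical, and it assumes boto3 with S3 permissions.

```python
# Minimal sketch: an S3 lifecycle rule that moves stale training artifacts to
# archival storage after 90 days and expires them after a year.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-ml-artifacts",                     # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-then-expire-stale-training-data",
                "Filter": {"Prefix": "training-runs/"},   # hypothetical prefix
                "Status": "Enabled",
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 365},
            }
        ]
    },
)
```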

Monitor and plan compute

Depending on your AI applications, the hardware required may differ, and so does the total cost of ownership. For all existing and upcoming production workloads, we strongly recommend making compute and network metric monitoring a foundational requirement. By gaining visibility into how your workloads are utilized, you can then implement optimization techniques such as right-sizing hardware, dynamically planning compute resources, and opting in to specialized hardware.
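As a starting point, the sketch below pulls two weeks of CPU utilization for an inference host from CloudWatch to inform right-sizing decisions. It assumes AWS and boto3; the instance ID is hypothetical, and GPU-backed workloads would monitor their GPU metrics in the same way.

```python
# Minimal sketch: fetch hourly CPU utilization for an inference host to spot
# under-utilized (right-sizing candidate) or saturated hardware.
from datetime import datetime, timedelta, timezone
import boto3

cloudwatch = boto3.client("cloudwatch")
end = datetime.now(timezone.utc)

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # hypothetical
    StartTime=end - timedelta(days=14),
    EndTime=end,
    Period=3600,                 # hourly datapoints
    Statistics=["Average", "Maximum"],
)

datapoints = sorted(stats["Datapoints"], key=lambda d: d["Timestamp"])
peak = max((d["Maximum"] for d in datapoints), default=0.0)
avg = sum(d["Average"] for d in datapoints) / max(len(datapoints), 1)
print(f"avg={avg:.1f}%  peak={peak:.1f}%  -> right-sizing candidate if both stay low")
```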

Monitor and optimize traffic

As in the previous section, we recommend establishing traffic metrics to understand where your AI traffic comes from and in what volume, and to plan resources accordingly. Standard networking optimizations such as load balancing and dedicated connectivity apply here; if you still observe uneven server traffic after applying them, we encourage you to investigate the AI inference code and hardware design patterns for optimization opportunities.
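Even a simple aggregation of inference request logs by source region can reveal skew worth investigating. The sketch below uses an in-memory sample; in practice you would read the same fields from your gateway or load balancer logs.

```python
# Illustrative traffic check: count inference requests by source region to see
# whether load is skewed. Log entries and region names here are hypothetical.
from collections import Counter

request_log = [
    {"region": "us-east-1", "latency_ms": 210},
    {"region": "us-east-1", "latency_ms": 190},
    {"region": "eu-west-1", "latency_ms": 640},
    {"region": "us-east-1", "latency_ms": 205},
]

by_region = Counter(entry["region"] for entry in request_log)
total = sum(by_region.values())
for region, count in by_region.most_common():
    print(f"{region}: {count} requests ({100 * count / total:.0f}% of traffic)")
```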

Reliability

Building reliable AI systems means designing for energy efficiency and resilience from the ground up. We walk you through two actionable recommendations: designing smarter, energy-efficient AI workflows and deploying AI at the edge.

Energy-efficient AI workflow

When it comes to designing AI workflows, avoid defaulting to large language models for every task. Meta recently discussed the continued success of employing “old AI” while noting that it does not expect agentic AI to have a significant impact on revenue in the next two years. Old AI, meaning classical predictive analytics and machine learning solutions, still powers many businesses’ day-to-day intelligent workflows. Before designing an agentic solution, we strongly recommend evaluating whether a simpler solution can already achieve your goals.
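A minimal sketch of that “simpler solution first” principle: route routine requests through a classic text classifier and only escalate low-confidence cases to a heavier LLM call. The training examples and escalation label are hypothetical, and it assumes scikit-learn is available.

```python
# Route routine tickets through a cheap classical classifier; escalate only
# ambiguous cases to an LLM. Training data below is a hypothetical toy sample.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["reset my password", "where is my invoice", "cancel my subscription", "billing question"]
labels = ["account", "billing", "account", "billing"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

def route(ticket: str, threshold: float = 0.75) -> str:
    """Return the predicted category, or flag the ticket for LLM escalation."""
    confidence = clf.predict_proba([ticket])[0].max()
    if confidence >= threshold:
        return clf.predict([ticket])[0]   # cheap, deterministic path
    return "escalate-to-llm"              # only ambiguous tickets pay the LLM cost

print(route("I need my invoice for March"))
```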

Deploy AI at the edge

The World Economic Forum estimated that processing AI locally on devices can reduce energy consumption by 100 to 1,000 times per task. Shifting appropriate workloads to the edge can also reduce latency and cost compared to hosting them in the cloud. For example, use on-device AI for document scanning instead of routing every scan to the cloud.
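A minimal sketch of that pattern, assuming a small model exported to ONNX and onnxruntime installed on the device: the scanned page is classified locally, so only the structured result needs to leave the device.

```python
# On-device inference with ONNX Runtime. The model file, its single-output shape,
# and the input dimensions are hypothetical placeholders for your exported model.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("document_classifier.onnx")   # hypothetical local model
input_name = session.get_inputs()[0].name

# Stand-in for a preprocessed page scan (1 x 3 x 224 x 224 float image tensor).
page = np.random.rand(1, 3, 224, 224).astype(np.float32)

outputs = session.run(None, {input_name: page})
scores = outputs[0]
print("Predicted document class:", int(scores.argmax()))
```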

Conclusion

Environmentally sustainable AI adoption is not just an ethical imperative but also a strategic lever for cost optimization and a resilient AI strategy. In this blog, we discussed several actionable options to embark on your sustainable AI journey, but there are still more granular design considerations you need to assess, such as how to select the appropriate AI workloads for edge deployment given the compliance requirements your organization must adhere to.

At Turnberry Solutions, we are committed to helping organizations navigate the complexities of sustainable AI adoption. Our expertise can guide you in implementing well-architected principles not only through the lens of technical solutioning, but also through organizational AI strategy design and change management.

Check out how our Focus Forward workshop can help guide you along the AI journey.
