AWS’s Carbon Footprint Tool: Still Not Fit for Purpose

Cloud providers are under increasing pressure to provide carbon transparency, especially as sustainability-linked regulation ramps up across the world. This week AWS finally updated its Customer Carbon Footprint Tool (CCFT) to version 2.0, claiming improved allocation of emissions across services and customer accounts.

So... does this match reality? TL;DR: no. This is still not a fit-for-purpose tool for enterprise customers aiming to run sustainable digital operations or meet ESG-related net-zero commitments.

What’s Actually Changed?

  • AWS now uses a new top-down emissions allocation model.

  • Emissions are assigned from cluster-level infrastructure to services, and then to customers.

  • Where possible, they allocate by normalised service usage (e.g. GB-month, vCPU-hours).

  • But where data is missing, they fall back on economic allocation: revenue-based estimates rather than actual consumption. A sketch of how much that fallback can shift the numbers follows below.
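
Here is a minimal sketch, in Python with invented figures, of what that difference looks like in practice. This is not AWS's published model; the customers, usage values, and revenue numbers are all hypothetical.

```python
# Illustrative sketch only -- not AWS's published model; all figures invented.

CLUSTER_EMISSIONS_KG = 10_000  # kgCO2e attributed to one cluster for a month

def allocate(total_kg, weights):
    """Split a total pro rata over a dict of weights."""
    denom = sum(weights.values())
    return {k: total_kg * w / denom for k, w in weights.items()}

# Usage-based: split by a normalised consumption metric (e.g. vCPU-hours).
usage = {"customer_a": 7_000, "customer_b": 3_000}      # vCPU-hours

# Economic fallback: split by revenue when consumption data is missing.
revenue = {"customer_a": 40_000, "customer_b": 60_000}  # USD billed

print(allocate(CLUSTER_EMISSIONS_KG, usage))
# {'customer_a': 7000.0, 'customer_b': 3000.0}
print(allocate(CLUSTER_EMISSIONS_KG, revenue))
# {'customer_a': 4000.0, 'customer_b': 6000.0}  <- same machines, different answer
```

Because discounts, reserved pricing, and enterprise agreements decouple revenue from consumption, the economic fallback can hand materially different footprints to customers running identical workloads.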

To be crystal clear: much of the data is still missing. The underlying data sources haven’t materially changed. AWS continues to rely on:

  • Planned power draw instead of real energy use,

  • Market-based metrics for Scope 2 (despite talking up location-based reporting),

  • Forecasted usage instead of real-time telemetry,

  • Historical trends and smoothing curves rather than direct measurement,

  • Partial invoice coverage (some data centres, some of the time).
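
The market/location distinction matters more than it sounds. A back-of-the-envelope sketch, where all intensities and volumes are invented placeholders:

```python
# Back-of-the-envelope only -- intensities and volumes are invented placeholders.

energy_kwh = 1_000_000            # electricity a workload actually consumed

grid_intensity = 0.35             # kgCO2e/kWh, local grid average (location-based)
contract_intensity = 0.02         # kgCO2e/kWh after RECs/PPAs (market-based)

location_based = energy_kwh * grid_intensity    # 350,000 kgCO2e
market_based = energy_kwh * contract_intensity  #  20,000 kgCO2e

# Reporting only the market-based figure makes the identical workload look
# ~17x cleaner, with no change to the physical electricity consumed.
```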

What’s Still Missing?

Plenty. In fact, all the things that matter for meaningful emissions management:

Scope 3 emissions - Still completely excluded. That means no embodied carbon from servers, no data centre construction or hardware manufacturing, no downstream or supply chain emissions. This is often 50–80% of a digital service’s footprint.
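
For context on what Scope 3 inclusion could look like: a common approach, used for example by the open-source Cloud Carbon Footprint methodology, is to amortise a server's embodied manufacturing emissions over its service life and charge each workload a share. A rough sketch with placeholder figures:

```python
# Rough amortisation sketch with placeholder figures -- not vendor data.

EMBODIED_KG = 1_300          # kgCO2e to manufacture one server (illustrative)
LIFETIME_YEARS = 4           # assumed service life
HOURS_PER_YEAR = 8_760

embodied_per_hour = EMBODIED_KG / (LIFETIME_YEARS * HOURS_PER_YEAR)

# A workload using 10% of the machine for 720 hours (roughly one month):
workload_embodied_kg = embodied_per_hour * 0.10 * 720
print(f"{workload_embodied_kg:.2f} kgCO2e embodied")  # ~2.67 kgCO2e
```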

Granularity and visibility - Customers get monthly estimates (with a three-month lag), broken down only by account, region, and top-level service. There is:

  • No breakdown by instance type, S3 storage class, or EBS vs EC2 split.

  • No reporting by tags, projects, or business units.

  • No ability to see emissions from network traffic, load balancing, or shared orchestration layers.

  • No indication of whether PaaS offerings are included.

Carbon intensity metrics - Nothing. No gCO₂e per vCPU-hour. No carbon per GB stored or transferred. No per-inference reporting for ML workloads. You still have no way to benchmark, optimise, or build internal KPIs.
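
None of these metrics is exotic; given emissions and usage data they are simple ratios. A sketch of the KPIs customers would build if the inputs existed (all inputs below are invented):

```python
# Sketch of the intensity KPIs the tool could expose -- all inputs invented.

emissions_g = 250_000        # gCO2e for a service over the period
vcpu_hours = 500_000
gb_months_stored = 120_000
ml_inferences = 10_000_000

kpis = {
    "gCO2e_per_vcpu_hour": emissions_g / vcpu_hours,                   # 0.5
    "gCO2e_per_gb_month": emissions_g / gb_months_stored,              # ~2.08
    "gCO2e_per_1k_inferences": emissions_g / (ml_inferences / 1_000),  # 25.0
}
```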

Real workload attribution - None to speak of:

  • If you want to assign emissions internally to teams, products, or cost centres, you can't.

  • If you want to compare deployment regions based on carbon efficiency, you can’t.

  • If you want to optimise for low-carbon architectures, you’re flying blind.
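
The frustrating part is that internal attribution is mechanically trivial once granular data exists. A hypothetical sketch: if emissions were exposed against the cost-allocation tags customers already maintain, showback would be a simple group-by (the records below are invented; CCFT exposes nothing like them):

```python
# Hypothetical per-resource records -- CCFT does not actually expose these.
from collections import defaultdict

records = [
    {"team": "payments", "resource": "i-0abc", "kg_co2e": 12.4},
    {"team": "payments", "resource": "i-0def", "kg_co2e": 8.1},
    {"team": "search",   "resource": "i-0ghi", "kg_co2e": 21.9},
]

by_team = defaultdict(float)
for r in records:
    by_team[r["team"]] += r["kg_co2e"]

print(dict(by_team))  # {'payments': 20.5, 'search': 21.9}
```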

What Users Actually Need

To make this tool actually useful to large organisations with real decarbonisation goals, AWS would need to:

  • Include Scope 3 emissions, starting with embodied hardware and supply chain

  • Include transparent location-based Scope 2 metrics

  • Report detail below the payer account level

  • Share actual power and water metrics

  • Replace proxy-based allocation with real-time, usage-based telemetry

  • Offer insights at the tag, workload, and business unit level

  • Provide carbon intensity metrics across all services and regions

  • Disaggregate emissions from infrastructure layers like network and control plane
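
To show how little is being asked for here: given published per-region intensity figures, comparing deployment regions for carbon efficiency becomes a one-line operation. A trivial sketch (intensities are invented placeholders, not AWS data):

```python
# Invented placeholder intensities -- AWS does not publish per-region figures.
region_intensity = {      # gCO2e/kWh, illustrative only
    "eu-north-1": 30,
    "eu-west-1": 350,
    "us-east-1": 400,
}

greenest = min(region_intensity, key=region_intensity.get)
print(greenest)  # eu-north-1
```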

Final Thought

AWS CCFT 2.0 still functions more like a made-up carbon invoice, one that can miss most of your real emissions, than a carbon intelligence tool. It helps you report something, but it misses far too much. It's high-level, averaged, and still leans too heavily on economic estimates and smoothing assumptions.

For anyone trying to do the right thing, or under pressure from regulators, investors, or internal net-zero goals, this tool won’t get you to where you need to be.

As usual with AWS, it's a sticking plaster designed to let them keep pretending their services are sustainable.
