You’ve probably done it: you’re rushing to validate a change, you need temporary SSH or RDP access, and you paste your current public IP into a security group / NSG / firewall rule. Then you forget about it. A week later your ISP rotates your address, your access breaks, and someone “temporarily” opens the rule to 0.0.0.0/0 to unblock a deploy. That temporary change becomes permanent. Congratulations—your vulnerability blast radius just grew quietly.
The problem isn’t that engineers don’t know better. It’s that IP-based allowlisting is still the lowest-friction control we reach for when we need “just enough” access during build, migration, incident response, or platform bootstrap. The tension is that the operational reality (dynamic IPs, CI runners, remote work, ephemeral environments) doesn’t match the static reality we encode into Terraform.
This article shows how to use the Build5Nines/myip/http module to eliminate manual IP lookups and keep firewall rules aligned with where Terraform actually runs—your laptop, a build agent, a GitHub runner—while also being honest about the failure modes and how to keep this pattern from turning into noisy drift.
Using the Build5Nines MyIP Terraform Module in Real Deployments
HashiCorp Terraform has steadily moved teams toward higher-level composition: modules, reusable patterns, and platform-level guardrails. At the same time, access control has shifted in two opposing directions:
- Zero-trust and identity-based access (great, but not always available during bootstrap or for legacy targets).
- Short-lived “pragmatic” allowlists (still everywhere: databases, Kubernetes control planes, jump hosts, admin APIs).
In early Terraform days, teams tended to inline everything. Today, most real production stacks are assembled from modules—some internal, some third-party—and the best ones do one thing well.
The Build5Nines/myip/http module is exactly that: a tiny data module that returns the public IP address of the machine executing the Terraform deployment so you can feed it into firewall rules as a CIDR. It exposes a single output (ip_address) and gives you controlled variability via inputs like url and request_headers.
If you’re building platforms, you should recognize the value of this approach: treat “where Terraform runs” as an input, not as a manual step someone performs on a Tuesday afternoon.
Problem or Tension
The core challenge isn’t retrieving an IP address. The challenge is making IP-based access safe and predictable in systems designed for immutability and repeatability.
Here’s what engineers run into in practice:
- Drift-by-design
- Your IP changes, so Terraform plans a change every time you run it from a different network.
- That change is “correct,” but it creates constant churn in environments where you want stable plans.
- CI/CD unpredictability
- Hosted runners often egress from shared pools. The IP can change between runs—or even within the same day.
- The module will faithfully return the runner’s egress IP… which may not be something you want to allow into prod.
- Operational coupling
- The firewall rule becomes coupled to the location of the operator or pipeline.
- That can be desirable for temporary admin access, but dangerous if it accidentally becomes part of a long-lived baseline.
- False sense of security
- Allowlisting an IP is not identity. It’s a network hint.
- If you’re relying on it as the primary control, you’re building on sand.
So the real question isn’t “How do I get my public IP in Terraform?” It’s:
When is it appropriate to bind access rules to the Terraform execution environment, and how do we do it without turning Terraform into a perpetual diff machine?
Insight and Analysis
The right mental model: “execution context as a first-class input”
Most Terraform modules model infrastructure inputs: CIDRs, subnets, names, SKUs, tags. But who is running Terraform (and from where) is usually treated as out-of-band.
This module pulls that execution context back into the graph. That’s powerful—but it changes how you should think about stability:
- Stable inputs (VNet CIDR, cluster name) define your baseline platform.
- Ephemeral inputs (current public IP) define session-scoped access.
If you mix them without a plan, you get noisy diffs and brittle automation.
So use this module intentionally in one of these patterns:
- Bootstrap and break-glass
- Temporarily lock down SSH/RDP while building or repairing access paths.
- Developer sandbox
- Where churn is acceptable and speed matters.
- Controlled CI egress
- Only when your pipelines have stable outbound IP (self-hosted runners, NAT gateway, fixed egress).
If you’re trying to apply this to “always-on prod admin access,” you’re probably solving the wrong problem.
What the module actually does (and what that implies)
At a high level:
- It performs an HTTP GET against a configurable url (defaulting to an IPv4 endpoint).
- It trims the response into a clean IP string.
- It exports the result as a single output, ip_address.
That simplicity is exactly why it’s useful: no providers beyond the standard HTTP data source behavior, no resources, no lifecycle complexity. But because it depends on an external HTTP request, it inherits all the operational realities of that dependency:
- If outbound internet access to the module's configured url is blocked from the execution environment, the plan/apply fails.
- If the endpoint is slow or flaky, Terraform becomes slow or flaky.
- If the response changes format, your downstream rules break.
This is where platform engineering discipline matters: treat the lookup endpoint like a dependency, not a cute trick.
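One way to apply that discipline is to validate what the endpoint returns before any rule consumes it. A minimal sketch, assuming Terraform 1.2+ for output preconditions; the output name my_ip_cidr is illustrative, not part of the module:

```hcl
module "myip" {
  source = "Build5Nines/myip/http"
}

output "my_ip_cidr" {
  value = "${module.myip.ip_address}/32"

  # cidrnetmask() only accepts valid IPv4 CIDRs, so can() turns a
  # malformed or IPv6 response into a hard failure instead of a
  # broken firewall rule downstream.
  precondition {
    condition     = can(cidrnetmask("${module.myip.ip_address}/32"))
    error_message = "The IP lookup endpoint did not return a valid IPv4 address."
  }
}
```

This fails the run loudly at the point of lookup, which is far cheaper to debug than a provider rejecting a mangled CIDR later.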
Step 1: Add the module and expose the IP
Minimal usage is intentionally boring:
module "myip" {
  source = "Build5Nines/myip/http"
}

output "my_public_ip" {
  value = module.myip.ip_address
}
Run terraform apply and you’ll see the IP in outputs.
Operationally, this gives you:
- A repeatable way to retrieve the current public IP
- A value you can feed into any resource attribute expecting an IP or CIDR
This is your foundation. The next step is deciding how tightly you want to couple this to firewall rules.
Step 2: Use it safely in firewall rules (Azure, AWS, GCP)
Most control planes want CIDR notation. For a single host, that’s usually /32 (IPv4). The module returns the IP, so you append the mask at the point of use.
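If several rules reference the address, one option is to compute the CIDR once in a local and reference it everywhere; the local name my_ip_cidr is an assumption for illustration:

```hcl
module "myip" {
  source = "Build5Nines/myip/http"
}

locals {
  # Build the /32 CIDR in one place so every rule stays consistent.
  my_ip_cidr = "${module.myip.ip_address}/32"
}
```

Each of the provider examples below can then use local.my_ip_cidr instead of repeating the interpolation.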
Azure NSG rule example (SSH):
module "myip" {
  source = "Build5Nines/myip/http"
}

resource "azurerm_network_security_rule" "allow_ssh" {
  name                        = "AllowSSHFromMyIP"
  priority                    = 100
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "22"
  source_address_prefix       = "${module.myip.ip_address}/32"
  destination_address_prefix  = "*"
  resource_group_name         = azurerm_resource_group.main.name
  network_security_group_name = azurerm_network_security_group.main.name
}
AWS security group ingress rule example (SSH):
module "myip" {
  source = "Build5Nines/myip/http"
}

resource "aws_security_group_rule" "allow_ssh" {
  type              = "ingress"
  from_port         = 22
  to_port           = 22
  protocol          = "tcp"
  cidr_blocks       = ["${module.myip.ip_address}/32"]
  security_group_id = aws_security_group.main.id
}
GCP firewall rule example (SSH):
module "myip" {
  source = "Build5Nines/myip/http"
}

resource "google_compute_firewall" "allow_ssh" {
  name    = "allow-ssh-from-myip"
  network = google_compute_network.main.name

  allow {
    protocol = "tcp"
    ports    = ["22"]
  }

  source_ranges = ["${module.myip.ip_address}/32"]
}
These patterns are straightforward. The nuance is in where you apply them.
Practical guidance: keep these “my IP” rules clearly labeled and scoped:
- Put them in a dedicated security group / NSG rule collection.
- Use explicit naming (AllowSSHFromMyIP) so they’re easy to audit.
- Prefer narrow ports and targets (don’t combine with broad admin access).
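Putting those three points together, a hedged AWS sketch of a dedicated, clearly labeled operator group; every name and tag value here is an assumption, not a convention the module imposes:

```hcl
module "myip" {
  source = "Build5Nines/myip/http"
}

# A dedicated group keeps session-scoped rules out of the baseline,
# and the tags make them easy to find (and expire) in audits.
resource "aws_security_group" "operator_access" {
  name        = "operator-session-access"
  description = "Session-scoped operator rules tied to Terraform execution IP"
  vpc_id      = aws_vpc.main.id

  tags = {
    Purpose   = "break-glass"
    ManagedBy = "terraform-access-overlay"
  }
}

resource "aws_security_group_rule" "allow_ssh_from_my_ip" {
  type              = "ingress"
  from_port         = 22
  to_port           = 22
  protocol          = "tcp"
  cidr_blocks       = ["${module.myip.ip_address}/32"]
  security_group_id = aws_security_group.operator_access.id
  description       = "AllowSSHFromMyIP"
}
```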
Step 3: Control the lookup endpoint (and why you should)
The module defaults to querying a plain-text IPv4 endpoint and sending a minimal Accept: text/plain header.
That’s fine for most use cases. But production engineering asks different questions:
- Do you trust that endpoint?
- Is it reachable from your runners?
- Do you need to pass through a proxy or internal gateway?
- Do you need authentication headers?
Custom URL example:
module "myip" {
  source = "Build5Nines/myip/http"
  url    = "https://api.ipify.org"
}
Custom HTTP headers example:
module "myip" {
  source = "Build5Nines/myip/http"

  request_headers = {
    Accept     = "text/plain"
    User-Agent = "Terraform"
  }
}
This is more important than it looks. In many enterprise environments, outbound access is restricted. A common platform approach is to provide a sanctioned internal endpoint (or an egress proxy) for “what is my public IP” queries. With url and request_headers, you can integrate cleanly without forking the module.
A custom url can also point at your own IP lookup endpoint. For example, you could host an internal service that returns the private IP address of the client calling it, letting you build rules from a privately discovered address rather than the public internet IP the module's default endpoint reports.
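Combining both inputs, a sanctioned internal endpoint with an auth header might look like the following; the hostname and token variable are hypothetical stand-ins for whatever your platform provides:

```hcl
variable "ip_lookup_token" {
  type      = string
  sensitive = true
}

module "myip" {
  source = "Build5Nines/myip/http"

  # Hypothetical internal lookup service; substitute your own endpoint.
  url = "https://whatsmyip.internal.example.com"

  request_headers = {
    Accept        = "text/plain"
    Authorization = "Bearer ${var.ip_lookup_token}"
  }
}
```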
SRE Note: if your incident response or break-glass process depends on this, make the endpoint highly available—or accept that “no internet” means “no access changes.”
Step 4: Avoid the two biggest failure modes
Failure mode #1: Turning your baseline into a moving target
If you embed “my IP” rules into your foundational modules (VPC/VNet, shared security groups, core clusters), you’re encoding an inherently variable input into a system you want stable.
Better pattern:
- Keep baseline security rules stable and identity-based where possible.
- Keep “my IP” rules as an overlay applied by a separate module or a separate Terraform workspace used for admin access.
Think of it like this:
- Platform workspace: builds the house.
- Access overlay workspace: temporarily opens a window when you need to get in.
That separation keeps drift contained and makes it easier to reason about changes during reviews.
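As a sketch of that overlay, an access workspace can look up a baseline security group it does not manage and attach only the session rule to it. This is an assumed layout (AWS, tag-based lookup); the tag value and rule details are illustrative:

```hcl
# Look up the baseline group by tag instead of managing it here,
# so the platform workspace's state never sees the churning rule.
data "aws_security_group" "baseline" {
  tags = {
    Name = "app-baseline"
  }
}

module "myip" {
  source = "Build5Nines/myip/http"
}

resource "aws_security_group_rule" "operator_ssh" {
  type              = "ingress"
  from_port         = 22
  to_port           = 22
  protocol          = "tcp"
  cidr_blocks       = ["${module.myip.ip_address}/32"]
  security_group_id = data.aws_security_group.baseline.id
  description       = "Session access - applied from the access overlay workspace"
}
```

Destroying the overlay workspace removes the window without touching the house.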
Failure mode #2: CI/CD egress roulette
Yes, you can use this module in CI/CD. It will return the public IP of the runner that’s executing the HashiCorp Terraform deployment.
But whether you should depends on your runner model:
- Self-hosted runners behind controlled NAT: great fit—stable egress, predictable allowlist.
- Hosted runners with shared egress pools: risky—your allowlist becomes unpredictable and could unintentionally allow broad/shared infrastructure.
If your pipeline needs network access to private endpoints (databases, APIs, clusters), the mature solution is usually fixed egress:
- NAT gateway with known public IPs
- Egress firewall with static addresses
- Private connectivity (peering, VPN, PrivateLink, PSC) so IP allowlisting is not the control
Use this module to confirm or wire those controls—not to pretend hosted runner IP churn is a good foundation.
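One way to "confirm" rather than "wire": assert in the pipeline that the runner is actually egressing through the approved fixed address. A sketch assuming Terraform 1.5+ check blocks (which warn rather than hard-fail) and a hypothetical expected_ci_egress_ip variable:

```hcl
variable "expected_ci_egress_ip" {
  type        = string
  description = "The fixed NAT/egress IP this pipeline is approved to use."
}

module "myip" {
  source = "Build5Nines/myip/http"
}

# Surfaces a warning when the runner's observed egress IP drifts from
# the approved address - a signal to investigate, not a new allowlist.
check "ci_egress_matches" {
  assert {
    condition     = module.myip.ip_address == var.expected_ci_egress_ip
    error_message = "Runner egress IP does not match the approved fixed egress address."
  }
}
```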
A pragmatic framework: “session access” vs “service access”
Here’s a simple rule that prevents most abuse of this pattern:
- Session access: human-in-the-loop, time-bounded, used for debugging, bootstrap, break-glass.
- This module is a strong fit.
- Service access: machine-to-machine, continuous, production-grade.
- Prefer identity, private networking, or stable egress—don’t bind it to whoever ran Terraform last.
When you treat module.myip.ip_address as session-scoped, everything becomes cleaner:
- Reviews are easier (“this rule is for operator access”)
- Drift is expected and contained
- Platform baselines remain stable
If you treat it as service-scoped, you end up normalizing constant change—or worse, training the team to ignore diffs.
Conclusion
The Build5Nines/terraform-http-myip module solves a deceptively expensive operational problem: eliminating manual IP lookups and keeping IP allowlists aligned with the real execution environment. It’s tiny, composable, and immediately useful for tightening access during bootstrap, debugging, and controlled environments.
The key is to use it with the right mental model:
- Your public IP is not configuration—it’s execution context.
- Treat it as session-scoped access, not baseline security posture.
- Keep it isolated (overlay/workspace/module boundary) so drift is intentional.
- In CI/CD, only rely on it when egress is stable—or fix egress first.
Used this way, you get the best outcome: tighter access with less manual work, without turning Terraform plans into noise or training your team to accept risky shortcuts.
Original Article Source: Stop Hard-Coding “Local IP” in Terraform: Lock Down Firewalls Dynamically written by Chris Pietschmann (If you're reading this somewhere other than Build5Nines.com, it was republished without permission.)