DevOps Engineer
Engineering / Remote
What you'll do
We're seeking a DevOps Engineer to build and maintain the infrastructure that powers large-scale web crawling, real-time data processing, and API services that handle millions of requests. You'll design systems that process massive datasets while maintaining 99.9% uptime for customer-facing services.
This role requires expertise in cloud infrastructure, container orchestration, and data pipeline optimization. You'll work directly with our engineering team to scale systems processing 250 million domains and serving technographic intelligence to thousands of users globally.
Design and maintain cloud infrastructure on AWS/GCP supporting web crawling, data processing, and API services at scale
Build automated deployment pipelines with CI/CD systems ensuring safe, frequent releases of platform updates
Optimize data processing workflows handling terabytes of crawled web data and historical technology profiles
Monitor system performance and implement alerting for crawling infrastructure, API endpoints, and data pipeline health
Implement security best practices including secrets management, network isolation, and compliance controls for customer data
Scale containerized microservices on Kubernetes to handle real-time technology detection
Maintain database systems supporting both transactional APIs and analytical queries across 20 years of historical data
Who you are
3+ years of DevOps or platform engineering experience with demonstrable expertise in cloud infrastructure
Strong containerization skills using Docker and Kubernetes for production workloads
Infrastructure as Code proficiency with Terraform, CloudFormation, or similar tools
CI/CD pipeline expertise using GitHub Actions, Jenkins, or equivalent automation platforms
Database administration experience with both SQL and NoSQL systems at scale
Monitoring and observability skills using tools like Prometheus, Grafana, DataDog, or New Relic
Security-focused mindset with understanding of compliance requirements and data protection
Technical Stack
Cloud Platforms: AWS (preferred) or Google Cloud Platform
Containers: Docker, Kubernetes, Helm charts
Infrastructure as Code: Terraform
Databases: PostgreSQL, Redis, Elasticsearch for different data patterns
Monitoring: Prometheus/Grafana stack with custom metrics
Languages: Python, Go, and shell scripting for automation
Message Queues: Apache Kafka or similar for data streaming
What we offer
Competitive salary with equity participation in our growth
Remote-first work with a home office setup budget and flexible hours across time zones
Opportunities for growth, including an annual learning budget and technical conference sponsorships
Supportive team culture with quarterly offsites, monthly tech talks, and direct access to leadership
Additional Benefits
Comprehensive health stipend covering medical, dental, and vision
Mental health and wellness support
Flexible PTO with a minimum of 20 days encouraged
High-spec equipment and monitor stipend
Annual budget for professional development and technical certifications
Our Values
Privacy-first, compliance-driven data practices
Engineering excellence and correctness over hype
Customer impact measured in revenue outcomes
Remote-first collaboration with written rigor and high ownership
Ready to Apply?
Send your resume highlighting your experience with large-scale infrastructure and data systems. Include examples of systems you've built or maintained, challenges you've solved, and measurable improvements you've delivered. Tell us about a time you scaled infrastructure to handle significant growth.
We're committed to building a diverse team and encourage applications from candidates of all backgrounds.