At Syntaxia, we believe in empowering businesses through data and innovation—and it all starts with an exceptional team. Explore our current openings and discover how you can contribute to our mission of transforming data into actionable insights. Let’s shape the future of decision-making together.
Sales Development Representative (SDR)
Location: Atlanta, GA (Remote First)
Department: Revenue Team
Reports To: CEO
You Will Own
Top-of-Funnel Pipeline: Identify 50+ target accounts/month (Fortune 5000 CTOs, CDOs)
Market Intelligence: Map pain points in legacy data systems to our blueprints
Foundational Processes: Build Syntaxia’s first SDR playbook and outreach strategy
Key Responsibilities
Outbound Prospecting:
- Cold email/LinkedIn campaigns pitching our AI architecture frameworks
- Book 15+ discovery calls/month for founders to validate solution fit
Lead Qualification:
- Score leads using criteria co-developed with the engineering team
- Document technical objections for product roadmap prioritization
Tool Mastery:
- Manage Syntaxia’s first CRM (Pipedrive) and sales engagement stack
- A/B test messaging around "data debt reduction" vs. "compliance-first AI"
Requirements
Non-Negotiables
- 1–2 years in tech SDR/BDR roles (SaaS, cloud, or data infra preferred)
- Basic understanding of data engineering concepts (ETL, APIs, cloud migration)
- Portfolio of 3–5 successful cold email templates from past roles
Skills We Value
- Experience with tools: Pipedrive, Basecamp, Notion, Loom
- Curiosity about AI/ML use cases in enterprise settings
- Self-starter who thrives in 0→1 environments
Compensation & Growth
Base Salary:
$50K–$55K
Commission:
- Earn 25% of every pilot deal you source
- $5K bonus for hitting 6-month quota
- Accelerate to 35% commission after exceeding targets
Promotion Path:
Transition to an Account Executive (AE) role
Data Engineer
Location: Southeast Asia (Remote)
Department: Data Engineering
Reports To: VP of Engineering
Role Overview
As a Data Engineer at Syntaxia, you'll join an innovative team developing a next-generation semantic layer for a large enterprise. Your strong foundation in SQL databases and stored procedures, together with any prior exposure to Snowflake, will be pivotal in building this advanced solution on knowledge graph technology.
While familiarity with knowledge graphs isn't required upfront, you'll have the opportunity to learn and master this cutting-edge approach to data engineering. If you're confident in your SQL expertise and ready to expand your skillset into exciting new areas, this role is designed for your professional growth.
Key Responsibilities
SQL and Stored Procedures: Write, test, and maintain complex SQL queries and stored procedures to support core data workflows and business processes.
Semantic Layer Development: Support the creation of a semantic data layer using knowledge graph concepts, built on top of Snowflake as the primary data warehouse.
Data Modeling: Build and refine data models that reflect real-world business logic, ensuring they are easy to understand and efficient to query.
Cross-Functional Collaboration: Work closely with data scientists, product teams, and engineers to gather requirements and deliver practical data solutions.
Performance Optimization: Monitor and improve query performance, storage efficiency, and data access patterns within SQL-based environments, including Snowflake.
Documentation: Maintain clear and up-to-date documentation covering data flows, business logic, and technical decisions to support transparency and collaboration.
Skill Development: Learn and apply new techniques related to knowledge graphs and semantic data structures while contributing with your existing expertise in SQL.
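To give a flavor of the semantic-layer work described above, here is a minimal, purely illustrative sketch. The table, predicate names, and "customer:" identifier scheme are invented for this example (they are not Syntaxia's actual schema); it simply shows the core knowledge-graph idea of recasting relational rows as subject-predicate-object triples, using plain SQL over an in-memory SQLite database.

```python
import sqlite3

# Hypothetical source table; names are invented for illustration only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, region TEXT);
INSERT INTO customers VALUES (1, 'Acme', 'EMEA'), (2, 'Globex', 'APAC');
""")

# A knowledge-graph-style semantic layer often views each relational cell as a
# subject-predicate-object triple. This query flattens the table accordingly.
triples = conn.execute("""
    SELECT 'customer:' || id AS subject, 'hasName' AS predicate, name AS object
    FROM customers
    UNION ALL
    SELECT 'customer:' || id, 'inRegion', region
    FROM customers
    ORDER BY subject, predicate
""").fetchall()

for s, p, o in triples:
    print(s, p, o)
```

In a production setting this mapping would typically live as views or stored procedures in the warehouse (e.g. Snowflake) rather than application code, but the translation step is conceptually the same.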
Requirements
Non-Negotiables
- Bachelor's Degree in Computer Science, Information Technology, or a related discipline.
- Minimum of 3 years' professional experience in data engineering or related roles.
- Strong proficiency with SQL databases and extensive experience writing and optimizing stored procedures.
- Solid understanding of data warehousing methodologies and ETL processes.
- Proven experience in data integration, modeling, and performance tuning.
Differentiators
- Hands-on experience with Snowflake (a Snowflake certification is a bonus).
- Familiarity with cloud technologies (Azure).
- Proficiency in Python.
How to Apply
Excited about reshaping the data landscape with us? Submit the following materials:
- Your updated resume
- LinkedIn profile link
- Relevant portfolio or project samples (optional)