Role Description: Full Stack / Data & AI Engineer Intern
Compass Analytics is excited to recruit a talented and driven Intern to join our growing Data & AI Delivery team. The ideal candidate will have a passion for designing, developing, and deploying cutting-edge AI and ML applications, as well as a strong foundation in MLOps and software engineering best practices.
About Compass Analytics
Compass Analytics specializes in delivering end-to-end data solutions to clients across diverse industries, including Aviation, Healthcare, Financial Services, Retail, and Sports. Our expertise spans multiple disciplines, including Data Architecture, Data Engineering, Data Governance, Artificial Intelligence & Machine Learning, Process Automation, and Dashboarding. We pride ourselves on fostering a culture that emphasizes purpose in what we build for our clients, elegance and rigor in how we work with everyone, and fun as we get the work done.
Based in Montréal, Québec, Compass Analytics is composed of talented data professionals who support their clients in their data transformation initiatives. Our clients lean on us to bring their data ideas into reality.
Role Details
Position: Full Stack / Data & AI Engineer Intern
Number of Open Roles: 1-3
Location: Remote or Hybrid in Montréal
Duration: Full-Time Internship (4 or 8 months)
Key Responsibilities:
Under the guidance of our leadership team and tech leads, the Data & AI Engineer Intern will be responsible for the following tasks:
- Collaborate closely with cross-functional teams and stakeholders to deliver data-driven and AI-powered solutions aligned with business objectives.
- Design, build, and maintain scalable data pipelines and architectures, ensuring reliable ingestion, transformation, and delivery of data.
- Collect, preprocess, and engineer features from structured and unstructured data to support analytics, machine learning, and AI use cases.
- Build and maintain back-end services and APIs for data/AI products, including feature retrieval, model inference, and data access layers.
- Maintain comprehensive technical documentation covering model architectures, data pipelines, experiments, and deployment processes.
- Stay informed on advancements in AI/ML research and recommend adoption of emerging tools, frameworks, and techniques.
- Support Data Engineering and Analytics projects as well as other projects within the scope of the assigned mandate (e.g., dashboarding or pipeline automation).
Technical Capabilities:
- Programming Languages: Proficiency in SQL and Python is essential; experience with PyTorch, Spark, Scala, or similar technologies is a plus.
- Cloud Platforms: Experience or familiarity with Databricks, Snowflake, AWS, Azure, or Google Cloud.
- Full Stack: Familiarity with modern web app and API frameworks, including React.js/Next.js for the frontend and Flask or FastAPI for the backend.
- GenAI: Familiarity with modern GenAI frameworks (e.g., LangChain, LangGraph) and hands-on experience building AI-powered applications is a plus.
- MLOps / DataOps: Familiarity with CI/CD pipelines, Git-based version control (e.g., GitHub), and deployment workflows.
- Collaboration Tools: Experience with JIRA and Confluence is a plus.
Minimum Requirements:
- Education: Currently enrolled in or recently completed a Bachelor’s degree in Computer Science, Software Engineering, Math/Physics, or a related quantitative field.
Preferred Qualifications:
- Problem Solving: Strong analytical and problem-solving skills with the ability to troubleshoot complex data issues.
- Organized & Rigorous Approach: Strong organizational skills and attention to detail, ensuring rigor in development and accuracy in outputs.
- Communication: Excellent verbal and written communication skills, with the ability to collaborate effectively across teams.