
Hitachi Energy Ireland Limited
Providing innovative energy solutions for a sustainable, electrified future.
Research Intern – Explainable AI and Reporting Framework for Power Grid Machine Learning Applications
A research internship to design and evaluate explainable‑AI (XAI) and reporting frameworks for machine learning in power grids.
Job Highlights
About the Role
The intern will support the design and evaluation of explainability and governance‑aligned reporting frameworks for machine‑learning models used in power‑grid applications, working under the supervision of research scientists at the Hitachi Energy Research Center. The role can be performed remotely or in a hybrid setting within Canada. Key responsibilities include conducting literature reviews and comparative assessments of ML explainability and trust‑building techniques; investigating frameworks for model transparency, reproducibility, and traceability; prototyping and testing explainability methods such as SHAP, LIME, and surrogate models using Python and modern ML tooling; and documenting findings in technical reports or publications.
Key Responsibilities
- Conduct literature reviews and comparative evaluations of ML explainability and trust‑building techniques.
- Investigate frameworks for model transparency, reproducibility, and traceability in industrial ML workflows.
- Prototype and test explainability methods (e.g., SHAP, LIME, surrogate models) in Python using modern ML tools.
- Contribute to the design of governance‑aligned reporting frameworks.
- Document research findings and present them in technical reports or publications.
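As an illustration of the surrogate‑model technique named among the responsibilities, the sketch below trains an interpretable decision tree to mimic a black‑box classifier and reports their agreement ("fidelity"). The dataset, model choices, and fidelity metric are assumptions for demonstration only, not part of the role description.

```python
# Illustrative sketch of a global surrogate explanation (assumed setup,
# not the team's actual workflow).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for a power-grid classification task.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)

# "Black-box" model whose behaviour we want to explain.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Interpretable surrogate trained to reproduce the black-box predictions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: fraction of inputs on which the surrogate agrees with the black box.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"Surrogate fidelity: {fidelity:.2f}")
```

A shallow tree keeps the surrogate human‑readable; in practice one would trade tree depth against fidelity and validate agreement on held‑out data.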
What You Bring
Candidates should be PhD students or candidates in Computer Science, Machine Learning, or Software or Electrical Engineering, or senior master's students with thesis experience. A solid understanding of machine‑learning concepts and software‑engineering practices is required, including model evaluation, validation pipelines, and CI/CD, along with familiarity with interpretability techniques and proficiency in Python and libraries such as Scikit‑learn, TensorFlow/PyTorch, SHAP, and LIME. Strong research experience (problem definition, solution exploration, result analysis), problem‑solving skills, and effective written and verbal communication are essential. Preferred qualifications include prior work with interpretability or explainability libraries; familiarity with AI governance frameworks such as the EU AI Act, AI Verify, or NIST AI RMF; first‑author publications in top AI/ML venues; critical and innovative thinking with leadership in realizing ideas; and experience with model‑driven software engineering.
Requirements
- PhD or senior master's student
- Python
- TensorFlow/PyTorch
- Scikit‑learn
- Research experience
- Interpretability techniques
Benefits
Hitachi Energy Canada Inc. offers a full‑time, paid Research Intern position based in Toronto, Ontario, with the flexibility to work remotely or in a hybrid format within Canada. The internship runs for four months, with the possibility of extension to six months, and is scheduled to start on April 6, 2026 (flexible).
Work Environment
Hybrid