Open Positions
AI Engineer (Remote)
We are hiring an AI Engineer specializing in LLMs (Large Language Models), Retrieval-Augmented Generation (RAG), and Generative AI. The role involves building advanced AI solutions that leverage state-of-the-art technologies.
Key Responsibilities:
Design, fine-tune, and deploy LLMs using frameworks and services such as Hugging Face Transformers and the OpenAI API.
Build RAG pipelines backed by vector databases and similarity-search libraries (e.g., Pinecone, FAISS); a minimal retrieval sketch follows this list.
Develop Generative AI solutions, including chatbots, summarization, and content creation tools.
Preprocess, clean, and annotate datasets for training and evaluation.
Optimize models for inference performance using techniques like quantization and pruning (see the quantization sketch after this list).
Deploy scalable solutions in production using Docker, Kubernetes, and cloud platforms (AWS, GCP, Azure).
Monitor and troubleshoot deployed models for accuracy and reliability.
Collaborate with cross-functional teams to align AI capabilities with business objectives.
Stay updated with advancements in AI/ML research and integrate best practices.
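To make the RAG responsibility concrete, here is a minimal retrieval sketch, assuming the sentence-transformers and faiss-cpu packages are installed; the corpus, model name, and query are illustrative only, and a full pipeline would pass the retrieved passages into an LLM prompt.

```python
import faiss
from sentence_transformers import SentenceTransformer

# Toy corpus; a real pipeline would index your own documents.
documents = [
    "RAG grounds LLM answers in passages retrieved from a corpus.",
    "FAISS performs fast similarity search over dense vectors.",
    "Pinecone is a managed vector database with a similar role.",
]

# Embed the corpus and index it with exact L2 search.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = encoder.encode(documents)
index = faiss.IndexFlatL2(doc_vectors.shape[1])
index.add(doc_vectors)

# Retrieve the top-2 passages for a query; a full pipeline would
# place these passages into the LLM prompt as grounding context.
query_vector = encoder.encode(["What does RAG do?"])
_, neighbors = index.search(query_vector, 2)
print([documents[i] for i in neighbors[0]])
```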
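Likewise, for the optimization bullet, a minimal post-training dynamic quantization sketch in PyTorch; the toy network is hypothetical, and real work would quantize a fine-tuned model and benchmark accuracy before and after.

```python
import torch
import torch.nn as nn

# Toy network standing in for a fine-tuned model.
model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))

# Post-training dynamic quantization: Linear weights are stored as
# int8 and dequantized on the fly at inference time, shrinking the
# model and often speeding up CPU inference.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

print(quantized(torch.randn(1, 512)).shape)  # torch.Size([1, 10])
```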
Requirements:
Proficiency in Python and in ML/NLP frameworks (Hugging Face, spaCy, PyTorch, TensorFlow).
Strong understanding of transformer architectures, from generative decoder models like GPT to encoder models like BERT (a toy generation example follows this list).
Experience with vector search databases (Pinecone, Weaviate, Elasticsearch).
Familiarity with cloud-based AI platforms and containerization tools.
Excellent problem-solving, analytical, and communication skills.
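As a toy illustration of the generation side of this requirement, a few lines using the Hugging Face pipeline API, assuming the transformers package; the prompt is illustrative, and GPT-2 stands in for any decoder-only model.

```python
from transformers import pipeline

# GPT-2 is a small decoder-only (generative) transformer; BERT, by
# contrast, is encoder-only and suited to classification, not generation.
generator = pipeline("text-generation", model="gpt2")
result = generator("Retrieval-augmented generation lets a model", max_new_tokens=20)
print(result[0]["generated_text"])
```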
Data Architect (Remote)
We are seeking a highly skilled and experienced Data Architect to lead our data engineering efforts. The ideal candidate will have a proven track record of designing, building, and maintaining scalable data pipelines, with strong expertise in Python, cloud technologies, and large-scale data systems. If you are passionate about working with data and enabling AI/ML capabilities in products, we want to hear from you.
Key Responsibilities:
Collaborate with cross-functional teams, including data scientists and software engineers, to implement data-driven solutions.
Optimize and manage data storage systems and ensure high availability, reliability, and performance.
Design, develop, and maintain robust, scalable ETL (Extract, Transform, Load) and ELT (Extract, Load, Transform) pipelines to support analytics and machine learning applications; a minimal ETL sketch follows this list.
Ensure data pipelines are optimized for efficiency, reliability, and scalability, and that they handle both structured and unstructured data.
Handle large-scale datasets, ensuring data integrity and consistency across platforms.
Provide technical expertise and mentorship to junior engineers and stakeholders.
Implement best practices in data engineering, including version control, testing, and deployment.
Stay updated with emerging technologies and tools in data engineering, AI/ML, and cloud ecosystems.
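To illustrate the ETL responsibility above, a minimal pandas sketch, assuming a parquet engine such as pyarrow is installed; the file and column names (events_raw.csv, event_time, user_id) are hypothetical.

```python
import pandas as pd

# Extract: read raw events (file and column names are hypothetical).
raw = pd.read_csv("events_raw.csv")

# Transform: deduplicate, coerce timestamps, and drop malformed rows.
clean = (
    raw.drop_duplicates()
       .assign(event_time=lambda df: pd.to_datetime(df["event_time"], errors="coerce"))
       .dropna(subset=["event_time", "user_id"])
)

# Load: write a columnar copy for downstream analytics.
clean.to_parquet("events_clean.parquet", index=False)
```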
Requirements:
Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
At least 5 years of hands-on experience in data engineering or related roles.
Proficiency in Python programming and its data-processing libraries (e.g., Pandas, PySpark).
Proven expertise in handling large-scale data systems such as distributed databases, data warehouses, and data lakes.
Strong experience with cloud platforms (AWS, Azure, or GCP) and associated tools for data storage, processing, and orchestration.
Practical knowledge of data pipeline frameworks like Apache Airflow, Kafka, or Spark; a skeleton Airflow DAG follows this list.
Hands-on technical expertise in designing and implementing end-to-end data solutions.
Familiarity with Generative AI (GenAI) and AI/ML technologies.
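To ground the Airflow item above, a skeleton DAG, assuming Airflow 2.4 or newer (which accepts the schedule keyword); the DAG id and task bodies are placeholders.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    ...  # pull raw data from a source system

def transform():
    ...  # clean and reshape the extracted data

with DAG(
    dag_id="daily_events_etl",     # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task  # run extract before transform
```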
What We Offer:
Enjoy the flexibility to work from the comfort of your home, with no commute hassles.
Work directly with the CXO team, gaining valuable insights and contributing to strategic decisions.
Take the opportunity to initiate, own, and drive impactful data engineering projects across the organization.
Become a key member of the engineering leadership team, driving innovation and excellence within the data domain.
Work with state-of-the-art technologies in AI, ML, and data engineering.
Competitive compensation and ample opportunities for career growth.
Kubernetes Architect (Remote)
We are currently looking for an experienced Kubernetes Architect to join our team at Innowhyte Inc. If you have a passion for designing and implementing complex virtualized infrastructure in both bare metal and cloud environments, we’d love to hear from you!
Key Responsibilities:
Architect and implement virtualized infrastructure and software platforms across bare metal and cloud environments.
Lead and review designs, test plans, and go-to-market strategies.
Create and maintain architectural documentation using models such as C4, UML, and sequence diagrams.
Ensure high availability, security, and scalability across distributed systems.
Required Skills & Experience:
Deep expertise in Linux OS, networking, virtualization, and storage.
Strong knowledge of Kubernetes ecosystems (APIs, configuration, operations, best practices).
Experience designing and implementing bare-metal Kubernetes solutions and adjacent container tooling (LXC/LXD, Podman, K3s, etc.).
Hands-on experience with:
Container networking (CNI): Calico, Cilium, Multus, and eBPF-based datapaths.
Container storage (CSI): Ceph, GlusterFS, Longhorn.
DevOps & CI/CD: GitLab, Helm, ArgoCD, Prometheus.
Extensive experience troubleshooting Kubernetes infrastructure and application environments; a small triage sketch follows this list.
Familiarity with cloud-native Kubernetes, particularly AWS.
Experience with Kubernetes cluster management tools (Rancher, Portainer, Rafay) is a plus.
Knowledge of Kubernetes operator development is a plus.
Strong collaboration, communication, and Agile product delivery skills.
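As a small illustration of the troubleshooting skill above, a sketch using the official Python kubernetes client, assuming kubeconfig access to a cluster; flagging non-Running, non-Succeeded pods is just one common first pass at triage.

```python
from kubernetes import client, config

# Authenticate from the local kubeconfig (assumes kubectl-level access).
config.load_kube_config()
v1 = client.CoreV1Api()

# First-pass triage: flag every pod that is neither Running nor
# Succeeded, across all namespaces.
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    if pod.status.phase not in ("Running", "Succeeded"):
        print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)
```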