
Software Development Engineer
Job Type : Contract/W2/C2C
Visa Status : Any Valid Visa with Proper Work Authorization
Salary : Negotiable based on Experience
Contract Duration : 1+ year, with a 4-year extension option
Onsite Location : Annapolis, Maryland; Seattle, Washington; Oklahoma City, Oklahoma; Pierre, South Dakota
Vacancy : 4
Job Summary
Dhaka Technologies Limited is looking for four (4) Software Development Engineers for our clients in Annapolis, Maryland; Seattle, Washington; Oklahoma City, Oklahoma; and Pierre, South Dakota. This is a hybrid, full-time (40 hrs/week) position.
Essential Functions
Design and Development:
- Collaboration: Work with cross-functional teams to design and implement software solutions.
- Event-Driven Design: Apply event-driven design principles to build scalable and resilient applications.
- Microservices: Develop microservices and REST APIs using Spring Boot to handle hyperscale traffic (a minimal sketch follows this list).
- Data Ingestion: Design and develop hyperscale batch and streaming data ingestion pipelines using Spark, Flink, Airflow, Kinesis, Data Factory, and related technologies.
- User Interfaces: Create responsive UIs using React.
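To give a flavor of the Spring Boot work described above, here is a minimal REST microservice sketch, assuming a generic "order" resource backed by an in-memory map; the class, endpoints, and domain are illustrative and not part of any client system.

```java
// A minimal Spring Boot REST sketch; the "order" resource and in-memory store
// stand in for real domain objects and persistence, purely for illustration.
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

@SpringBootApplication
@RestController
@RequestMapping("/api/v1/orders")
public class OrderServiceApplication {

    // In-memory store standing in for a real persistence layer.
    private final Map<String, String> orders = new ConcurrentHashMap<>();

    public static void main(String[] args) {
        SpringApplication.run(OrderServiceApplication.class, args);
    }

    @PutMapping("/{id}")
    public ResponseEntity<String> upsertOrder(@PathVariable String id, @RequestBody String payload) {
        orders.put(id, payload);
        return ResponseEntity.ok(id);
    }

    @GetMapping("/{id}")
    public ResponseEntity<String> getOrder(@PathVariable String id) {
        return Optional.ofNullable(orders.get(id))
                .map(ResponseEntity::ok)
                .orElseGet(() -> ResponseEntity.notFound().build());
    }
}
```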
AI Agent:
- Agentic Workflow: Design and develop multimodal AI agents with LangChain, LangGraph, CrewAI, Amazon Bedrock AgentCore, RAG, and other state-of-the-art technologies (a schematic RAG sketch follows this list).
- Vector Stores: Implement data persistence with vector databases and storage solutions such as Pinecone, Chroma, S3 Vectors, Redis vector search, etc.
- Semantic Search: Engineer semantic search using Elasticsearch, OpenSearch, etc.
- Large Language Models: Integrate LLMs such as GPT, Claude, Gemini, LLaMA, and Grok into the agent.
- Fine-Tuning: Fine-tune models based on requirements to optimize agent performance.
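As an illustration of the agentic/RAG work above, the following is a schematic retrieval-augmented generation loop. The EmbeddingClient, VectorStore, and ChatModel interfaces are hypothetical stand-ins for whichever provider SDKs (Bedrock, Pinecone, an LLM API, etc.) a given project uses; this is a sketch of the pattern, not a specific framework's API.

```java
// Schematic RAG agent loop; the three interfaces below are hypothetical
// abstractions over real embedding, vector-store, and LLM providers.
import java.util.List;

interface EmbeddingClient { float[] embed(String text); }
interface VectorStore { List<String> similaritySearch(float[] queryVector, int topK); }
interface ChatModel { String complete(String prompt); }

public class RagAgent {
    private final EmbeddingClient embeddings;
    private final VectorStore store;
    private final ChatModel llm;

    public RagAgent(EmbeddingClient embeddings, VectorStore store, ChatModel llm) {
        this.embeddings = embeddings;
        this.store = store;
        this.llm = llm;
    }

    // Retrieve relevant context from the vector store, then ground the LLM answer in it.
    public String answer(String question) {
        float[] queryVector = embeddings.embed(question);
        List<String> context = store.similaritySearch(queryVector, 5);
        String prompt = "Answer using only the context below.\n"
                + "Context:\n" + String.join("\n---\n", context)
                + "\nQuestion: " + question;
        return llm.complete(prompt);
    }
}
```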
Message Brokers:
- RabbitMQ: Work with RabbitMQ for message queuing and event streaming.
- Kafka: Understand the architectural differences between Kafka and RabbitMQ if experienced with Kafka.
- Other: AWS SNS and SQS; Azure Service Bus and Event Hubs.
- Implementation: Develop message producers and consumers (a minimal Kafka sketch follows this list).
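A minimal producer/consumer sketch using the Kafka Java client, to illustrate the messaging work involved; the broker address, topic name, and consumer group id are placeholders.

```java
// Minimal Kafka producer/consumer sketch (kafka-clients); broker, topic,
// and group id are placeholders, and a real consumer would poll in a loop.
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class EventPipeline {
    public static void main(String[] args) {
        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "localhost:9092");
        producerProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        producerProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // Publish one event to the placeholder topic.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            producer.send(new ProducerRecord<>("device-events", "device-42", "{\"status\":\"online\"}"));
        }

        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", "localhost:9092");
        consumerProps.put("group.id", "event-processors");
        consumerProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        consumerProps.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        // Poll once and print whatever arrived.
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps)) {
            consumer.subscribe(List.of("device-events"));
            for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(5))) {
                System.out.printf("key=%s value=%s%n", record.key(), record.value());
            }
        }
    }
}
```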
Rules Engine & Orchestrator:
- Workflow Orchestration: Design and implement orchestration pipelines and state machines to manage complex, multi-step processes across microservices, ensuring fault tolerance, retry strategies, and high availability.
- Rules Engine Development: Build and maintain rules engines (e.g., Drools, custom rule evaluators) for dynamic business logic evaluation, enabling configuration-driven workflows and reducing hardcoded dependencies (a small custom-evaluator sketch follows this list).
- Dynamic Decisioning: Implement event-driven triggers and conditional workflows based on real-time system states, historical data, and rule-based prioritization to optimize operational efficiency.
- Integration with Services: Develop orchestrators that integrate seamlessly with internal APIs, data ingestion pipelines, and distributed event buses (e.g., Kafka, Event Hub) to ensure synchronized and reliable operations.
- Monitoring & Observability: Incorporate logging, tracing, and audit mechanisms to track orchestration steps and rule evaluations for debugging, compliance, and performance optimization.
- Scalability & Performance: Optimize orchestrator throughput and rule evaluation performance by leveraging caching strategies, asynchronous execution, and load-balanced distributed designs.
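To illustrate the "custom rule evaluator" idea, here is a tiny configuration-driven evaluator sketch; the Claim record, rule names, and actions are invented for the example, and a production system might load rules from Drools or an external rule store instead.

```java
// Tiny configuration-driven rule evaluator sketch; the Claim domain and the
// rules themselves are illustrative placeholders.
import java.util.List;
import java.util.function.Predicate;

public class RuleEngineSketch {

    record Claim(double amount, boolean flaggedByFraudModel) {}
    record Rule(String name, Predicate<Claim> condition, String action) {}

    // Evaluate all rules against a claim and return the actions that fire, in order.
    static List<String> evaluate(Claim claim, List<Rule> rules) {
        return rules.stream()
                .filter(rule -> rule.condition().test(claim))
                .map(Rule::action)
                .toList();
    }

    public static void main(String[] args) {
        List<Rule> rules = List.of(
                new Rule("high-value", c -> c.amount() > 10_000, "ROUTE_TO_MANUAL_REVIEW"),
                new Rule("fraud-flag", Claim::flaggedByFraudModel, "HOLD_PAYMENT"));

        System.out.println(evaluate(new Claim(25_000, false), rules)); // [ROUTE_TO_MANUAL_REVIEW]
    }
}
```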
IoT Systems & Edge Integration:
- Device Connectivity & Ingestion: Architect and implement secure, scalable ingestion pipelines for IoT data using services and protocols such as IoT Hub and MQTT, as well as custom protocols, to connect thousands of distributed devices reliably (an MQTT subscriber sketch follows this list).
- Edge Computing: Design and deploy lightweight compute modules at the edge (using Azure IoT Edge, AWS Greengrass, etc.) for low-latency processing, filtering, and local decision-making before cloud transmission.
- Telemetry & Monitoring: Capture and process real-time telemetry data for anomaly detection, predictive maintenance, and performance insights using streaming platforms such as Event Hubs or Kafka.
- Security & Identity Management: Enforce authentication, encryption, and certificate-based identity for devices, and implement device lifecycle management with role-based policies and key rotation.
- Data Routing & Integration: Route processed data to various storage and analytics systems (e.g., Blob Storage, Cosmos DB, Data Lake) and enable downstream consumers via event-driven architecture and APIs.
- Fleet Management & OTA Updates: Build centralized device management interfaces and pipelines for firmware updates, configuration changes, and remote diagnostics across large-scale device fleets.
- Scalability & Resilience: Design for elastic scaling, fault isolation, and backpressure handling to ensure high availability and reliability of the IoT platform under fluctuating loads.
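A minimal MQTT telemetry subscriber sketch, assuming the Eclipse Paho v3 Java client; the broker URL, topic filter, and client id are placeholders, and a managed broker such as IoT Hub would add its own authentication scheme.

```java
// Minimal MQTT telemetry subscriber sketch (Eclipse Paho v3 client);
// broker URL, topic filter, and client id are placeholders.
import org.eclipse.paho.client.mqttv3.IMqttDeliveryToken;
import org.eclipse.paho.client.mqttv3.MqttCallback;
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class TelemetrySubscriber {
    public static void main(String[] args) throws MqttException {
        MqttClient client = new MqttClient("ssl://broker.example.com:8883", "ingest-worker-1");

        MqttConnectOptions options = new MqttConnectOptions();
        options.setCleanSession(true);
        options.setAutomaticReconnect(true); // tolerate flaky device-network links

        client.setCallback(new MqttCallback() {
            @Override public void connectionLost(Throwable cause) {
                System.err.println("Connection lost: " + cause.getMessage());
            }
            @Override public void messageArrived(String topic, MqttMessage message) {
                // Hand the payload off to the downstream ingestion pipeline here.
                System.out.printf("topic=%s payload=%s%n", topic, new String(message.getPayload()));
            }
            @Override public void deliveryComplete(IMqttDeliveryToken token) { }
        });

        client.connect(options);
        client.subscribe("devices/+/telemetry", 1); // QoS 1: at-least-once delivery
    }
}
```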
Deployment and Infrastructure:
- Kubernetes: Deploy microservices, Rules Engine, AI Agents, workflows, and other system components to Kubernetes clusters using CI/CD pipelines.
- Monitoring: Monitor and troubleshoot production systems.
- Optimization: Enhance application performance and scalability.
Quality Assurance:
- Testing: Write unit tests using JUnit, NUnit, Moq, etc., aiming for 85% coverage, and participate in peer code reviews (a JUnit example follows this list).
- Debugging: Conduct system testing and debugging activities.
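A short JUnit 5 example of the unit-testing style described above; the DiscountCalculator under test is invented purely for illustration.

```java
// JUnit 5 example; the nested DiscountCalculator is a stand-in unit under test.
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

class DiscountCalculatorTest {

    // Hypothetical unit under test.
    static class DiscountCalculator {
        double apply(double price, double percent) {
            if (percent < 0 || percent > 100) {
                throw new IllegalArgumentException("percent must be in [0, 100]");
            }
            return price * (1 - percent / 100.0);
        }
    }

    @Test
    void appliesPercentageDiscount() {
        assertEquals(90.0, new DiscountCalculator().apply(100.0, 10.0), 1e-9);
    }

    @Test
    void rejectsInvalidPercentages() {
        assertThrows(IllegalArgumentException.class,
                () -> new DiscountCalculator().apply(100.0, 150.0));
    }
}
```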
Security & Compliance:
- Threat Modeling & Risk Assessment: Create, analyze, and implement threat models of the system to assess vulnerabilities and security threats.
- Security Enforcement: Enforce system- and application-level security and compliance via API gateways, IAM, OAuth 2.0, RBAC, rate limiting, and other best practices to prevent unauthorized access, protect data, mitigate DDoS attacks, and maintain the overall security of the system.
- Encryption & Data Protection: Design and implement end-to-end encryption strategies (e.g., TLS 1.3, AES-256, RSA) for data in transit and at rest, alongside secure key management practices, to guarantee data confidentiality and prevent data exfiltration (an AES-GCM sketch follows this list).
- Vulnerability Management: Collaborate with DevSecOps to monitor, detect, and patch vulnerabilities using automated scanning tools, security pipelines, and continuous monitoring of security posture.
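A minimal AES-256-GCM encrypt/decrypt sketch using only the JDK, illustrating the data-at-rest encryption mentioned above; in practice the key would come from a managed KMS or HSM rather than being generated in-process, and the sample plaintext is a placeholder.

```java
// Minimal AES-256-GCM sketch with the JDK's javax.crypto APIs; the key is
// generated in-process only for illustration (use a KMS/HSM in production).
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class AesGcmSketch {
    public static void main(String[] args) throws Exception {
        // 256-bit AES key.
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(256);
        SecretKey key = keyGen.generateKey();

        // GCM requires a unique 12-byte IV per encryption under the same key.
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal("sample-record-payload".getBytes(StandardCharsets.UTF_8));

        // Decrypt with the same key and IV; GCM also verifies integrity.
        cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
        System.out.println(new String(cipher.doFinal(ciphertext), StandardCharsets.UTF_8));
    }
}
```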
Documentation and Communication:
- Technical Documentation: Document technical specifications, architecture, and design decisions.
- Design Documentation: Include sequence diagrams for major use cases.
- Collaboration: Work with product managers, designers, and engineers to define requirements and deliverables.
SKILLS, EXPERIENCE, & CAPABILITIES:
Meet the following minimum qualifications:
- BS, MS, or PhD in Computer Science or a related field.
- 8+ years of relevant experience in software development.
- Strong knowledge of distributed systems.
- Strong knowledge of microservices and event driven architecture.
- Strong AI Agent design and development expertise including multimodal capabilities.
- Fundamental knowledge of ML, LLMs, embeddings, transformers, and vectors.
- Demonstrated, hands-on experience with system and data security and compliance (e.g., HIPAA).
- Expertise in design patterns.
- At least 5 years of experience in distributed system development.
- At least 2 years of experience with orchestrators, state machines, and rules engines (e.g., Drools) in a production environment.
- Minimum 2 years of experience integrating IoT systems with cloud platforms (AWS, Azure, GCP).
- Familiarity with Java, C#, Python, C++, and other similar languages.
- Experience with the Spring framework and multi-threading (ExecutorService, ForkJoinPool).
- Familiarity with Kubernetes for container orchestration.
- Front-end experience with React and/or Angular.
- Experience with RabbitMQ or Kafka, and with SQS/SNS, for message queuing.
- Familiarity with RESTful APIs and web services.
- Elasticsearch and OpenSearch experience is a plus.
- Excellent verbal and written communication skills.
- Ability to establish and maintain effective working relationships with peers, end users, vendor staff, and management.
- Analyze complex technical challenges and propose effective solutions.
Application Process : Interested candidates should submit their resume and cover letter to hr@dhakatech.us. Please include “Software Engineer Application” in the subject line.
Are you interested?
Empowering innovation, building futures: join our IT revolution!