1) What is microservices architecture and how does it work?
A) Microservices architecture is a way of designing software systems where an application is built as a collection of small, independent services that communicate with each other over a network.
Instead of having one big, tightly integrated monolithic codebase, you break it into smaller, self-contained components—each responsible for a specific business function.
How It Works
1. Decomposition into Services
Each microservice handles a single, well-defined capability (e.g., User Management, Payment Processing, Inventory Tracking).
Services are developed, deployed, and scaled independently.
2. Independent Deployment
You can update or replace one service without redeploying the whole system.
This enables faster releases and easier experimentation.
3. Communication Between Services
Services talk to each other using APIs (often HTTP/REST, gRPC, or messaging systems like Kafka or RabbitMQ).
Data sharing is typically minimal—each service usually has its own database to maintain independence.
4. Service Discovery & Load Balancing
A service registry helps services find each other dynamically.
Load balancers distribute requests across multiple instances.
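To make the registry-plus-load-balancing idea concrete, here is a minimal, illustrative Python sketch (not any particular registry product like Eureka or Consul): instances register under a service name, and lookups rotate through them round-robin.

```python
import itertools

class ServiceRegistry:
    """Minimal in-memory service registry: maps service names to instance
    addresses and hands them out round-robin (a toy load balancer)."""
    def __init__(self):
        self._instances = {}   # service name -> list of addresses
        self._cursors = {}     # service name -> round-robin iterator

    def register(self, name, address):
        self._instances.setdefault(name, []).append(address)
        # rebuild the round-robin cursor over the updated instance list
        self._cursors[name] = itertools.cycle(self._instances[name])

    def resolve(self, name):
        """Return the next instance for `name`, rotating across all of them."""
        if name not in self._cursors:
            raise LookupError(f"no instances registered for {name}")
        return next(self._cursors[name])

registry = ServiceRegistry()
registry.register("payment", "10.0.0.1:8080")
registry.register("payment", "10.0.0.2:8080")

print(registry.resolve("payment"))  # 10.0.0.1:8080
print(registry.resolve("payment"))  # 10.0.0.2:8080
print(registry.resolve("payment"))  # 10.0.0.1:8080
```

Real registries add health checks and TTL-based deregistration so dead instances stop receiving traffic.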
5. Resilience & Scaling
If one service fails, others can still operate (assuming proper design).
You can scale only the services that need extra capacity instead of the whole system.
6. Technology Flexibility
Teams can choose different programming languages, frameworks, and storage solutions for each microservice (as long as they speak the agreed-upon communication protocols).
Example
Imagine an e-commerce platform:
Auth Service – Handles login and registration
Catalog Service – Manages product listings
Cart Service – Handles shopping cart operations
Payment Service – Processes transactions
Shipping Service – Calculates delivery estimates and handles logistics
If you need to improve payment speed, you just update the Payment Service without touching the rest.
Advantages
Scalability – Scale services independently
Faster Development – Parallel work by small teams
Resilience – Failure in one service won’t necessarily crash the whole app
Tech Diversity – Use the right tool for each job
Challenges
Complexity – More moving parts than monolithic architecture
Data Consistency – No single database, so distributed transactions are harder
Networking Overhead – More API calls = higher latency potential
DevOps Skills Needed – CI/CD pipelines, containerization (Docker, Kubernetes) often required
1. Steps to Implement Microservices Architecture
Step 1 – Identify Business Capabilities
Break down your application into domains (e.g., Billing, Orders, Shipping, Inventory).
Each domain becomes a microservice.
Step 2 – Define Service Boundaries
Clearly define what each service is responsible for.
Use Domain-Driven Design (DDD) concepts like bounded contexts.
Step 3 – Choose Communication Mechanism
Synchronous: REST, gRPC (for request-response)
Asynchronous: Message queues like RabbitMQ, Kafka (for event-driven)
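The asynchronous style can be sketched with a tiny in-memory publish/subscribe bus (an illustrative stand-in for a broker like RabbitMQ or Kafka, not their actual APIs): the publisher fires an event and never waits for, or knows about, the consumers.

```python
from collections import defaultdict

class EventBus:
    """Toy in-memory pub/sub bus illustrating event-driven communication."""
    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of handlers

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        # fire-and-forget: the publisher is decoupled from the consumers
        for handler in self._subscribers[topic]:
            handler(event)

bus = EventBus()
shipped = []
# the Shipping service reacts to events from the Order service
bus.subscribe("order.created", lambda e: shipped.append(e["order_id"]))
bus.publish("order.created", {"order_id": 42})
print(shipped)  # [42]
```

With a real broker, the handlers would run in separate processes and the broker would persist the events, but the decoupling is the same.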
Step 4 – Assign Independent Databases
Each service should have its own database to avoid tight coupling.
Choose storage type based on needs (SQL, NoSQL, in-memory).
Step 5 – Containerize Services
Use Docker to package services.
Orchestrate them with Kubernetes or ECS.
Step 6 – Implement Service Discovery
Use Eureka, Consul, or Kubernetes DNS to let services find each other dynamically.
Step 7 – Ensure Observability
Logging (ELK stack, Splunk)
Metrics (Prometheus, Grafana)
Tracing (Jaeger, Zipkin)
Step 8 – Secure the Services
API Gateway for routing & authentication.
OAuth 2.0 / JWT for securing endpoints.
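To show what "securing endpoints with JWT" actually involves, here is a hedged, stdlib-only sketch of HS256 signing and verification (in production you would use a vetted library such as a JWT package or your framework's middleware, never hand-rolled crypto):

```python
import base64
import hashlib
import hmac
import json

def _b64url(data: bytes) -> str:
    """Base64url without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: bytes) -> str:
    """Build an HS256 JWT: base64url(header).base64url(payload).signature."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = _b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_jwt(token: str, secret: bytes) -> dict:
    """Check the signature and return the payload; raise if tampered with."""
    header, body, sig = token.split(".")
    signing_input = f"{header}.{body}".encode()
    expected = _b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("invalid signature")
    padded = body + "=" * (-len(body) % 4)   # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(padded))

token = sign_jwt({"sub": "user-1", "scope": "orders.read"}, b"dev-secret")
print(verify_jwt(token, b"dev-secret"))
```

A real validator would also check registered claims such as `exp`, `iss`, and `aud`, which this sketch omits.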
2. Approaches to Building Microservices
A. Top-Down (Greenfield Development)
Build a brand-new system using microservices from scratch.
Best when starting a new project with no legacy constraints.
B. Bottom-Up (Refactoring Monolith)
Start from your existing monolith and gradually extract services.
Typical pattern:
1. Identify a feature.
2. Extract it as a microservice.
3. Connect via API Gateway.
Example: Extract Order Processing from your monolith into a separate service.
C. Strangler Fig Pattern
Replace monolith modules one by one with microservices until nothing of the monolith remains.
Both old and new code run together during migration.
D. Hybrid Approach
Keep core features in the monolith, and build new features as microservices.
Gradually replace monolith parts.
Q) How to secure an API
A) We secure APIs using authentication, authorization, and encryption.
Authentication is done via OAuth2/OpenID Connect with JWT tokens or API keys.
Authorization uses RBAC or scopes.
All traffic goes over HTTPS/TLS, with input validation and rate limiting at API Gateway.
We use centralized identity providers (e.g., Azure AD) and follow least privilege & key rotation best practices.
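The rate limiting mentioned above is commonly implemented at the gateway as a token bucket. The sketch below is illustrative (not any specific gateway's configuration): each request consumes a token, and tokens refill at a fixed rate up to a cap.

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: requests consume tokens, which refill at
    `rate` tokens per second up to `capacity`. Bursts up to `capacity` are
    allowed; sustained traffic is capped at `rate` requests per second."""
    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1.0, capacity=2)
print([bucket.allow() for _ in range(3)])  # [True, True, False]
```

In a gateway, a bucket would typically be keyed per client (API key or IP) rather than shared globally.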
Circuit Breaker
A Circuit Breaker is a design pattern in microservices that prevents repeated calls to a failing service.
It has three states: Closed (calls pass), Open (calls blocked after failures), and Half-Open (a few test calls are allowed before fully closing again).
It improves fault tolerance and avoids cascading failures.
Circuit Breaker States
Closed → Normal state, all requests pass through. Failures are counted.
Open → Triggered after threshold failures, all calls blocked for a cooldown period.
Half-Open → After cooldown, a few test calls are allowed to check if service is healthy.
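The three states above can be sketched as a small state machine. This is an illustrative hand-rolled version; in .NET you would normally use a library like Polly rather than writing your own.

```python
import time

class CircuitBreaker:
    """Closed -> Open after `failure_threshold` consecutive failures;
    Open -> Half-Open after `cooldown` seconds; a successful test call
    in Half-Open closes the circuit again."""
    def __init__(self, failure_threshold=3, cooldown=30.0):
        self.failure_threshold = failure_threshold
        self.cooldown = cooldown
        self.failures = 0
        self.state = "closed"
        self.opened_at = 0.0

    def call(self, func, *args, **kwargs):
        if self.state == "open":
            if time.monotonic() - self.opened_at >= self.cooldown:
                self.state = "half-open"        # allow a test call through
            else:
                raise RuntimeError("circuit open: call blocked")
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            # a failed test call, or too many failures, (re)opens the circuit
            if self.state == "half-open" or self.failures >= self.failure_threshold:
                self.state = "open"
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        self.state = "closed"                   # success closes the circuit
        return result

breaker = CircuitBreaker(failure_threshold=2, cooldown=5.0)

def flaky():
    raise ConnectionError("downstream unavailable")

for _ in range(2):
    try:
        breaker.call(flaky)
    except ConnectionError:
        pass
print(breaker.state)  # open
```

Once open, further calls fail fast with `RuntimeError` instead of hitting the broken dependency, which is what stops cascading failures.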
OSLATE
OSLATE is a checklist for designing resilient microservices — it covers Observability, Security, Latency, Availability, Throughput, and Efficiency to ensure high performance, reliability, and scalability.
O – Observability: Logging, metrics, tracing (e.g., OpenTelemetry, Application Insights).
S – Security: Authentication, authorization, data encryption, API Gateway protection.
L – Latency: Optimize response times, use caching, asynchronous messaging.
A – Availability: Redundancy, failover, health checks, load balancing.
T – Throughput: Scale services horizontally, queue workloads, use batching.
E – Efficiency: Resource optimization, cost control, auto-scaling, right-sizing services.
Q) How to implement microservices and approaches
To implement microservices, I follow these steps and approaches:
1. Identify Services – Break the application into small services aligned to business capabilities (Domain-Driven Design).
2. Choose Communication – Use REST/gRPC for synchronous, and messaging (RabbitMQ, Azure Service Bus, Kafka) for asynchronous.
3. Database per Service – Each service has its own DB to ensure loose coupling.
4. API Gateway – Central entry point for routing, authentication, and rate limiting.
5. Security – OAuth2/OpenID Connect with JWT tokens, HTTPS, role-based access.
6. Resilience – Patterns like Circuit Breaker, Retry, Bulkhead (Polly in .NET).
7. Observability – Centralized logging, metrics, tracing (e.g., OpenTelemetry).
8. Deployment – Use containers (Docker, Kubernetes, Azure Container Apps) for scalability.
One-line wrap-up:
Identify services → isolate them → secure & connect them → monitor & deploy independently.
Q) Security Configuration in microservices/API
Security configuration in microservices covers authentication, authorization, and transport security.
We configure services to use OAuth2/OpenID Connect with JWT tokens, validate them at the API Gateway,
enforce HTTPS/TLS, and apply role- or scope-based authorization.
We also secure service-to-service calls with mTLS or managed identities, enable input validation, rate limiting, and use centralized secrets management (e.g., Azure Key Vault).
1. Authentication – Azure AD / Auth0 / IdentityServer, JWT token validation in middleware.
2. Authorization – RBAC/Scopes, `[Authorize]` attributes in .NET.
3. Transport Security – HTTPS/TLS, HSTS headers, disable weak protocols.
4. Secrets & Keys – Store in Azure Key Vault, not in code.
5. Service-to-Service Security – mTLS, managed identities, network restrictions.
6. API Gateway Policies – Validate tokens, apply IP whitelisting, throttle requests.
7. Monitoring – Security logging, intrusion detection, anomaly alerts.
Q) how to secure DB
- I secure a database by enforcing strong authentication and least-privilege authorization.
- I encrypt data at rest with TDE and in transit with TLS, and isolate the database in a private network with firewall rules or VNET integration.
- I store credentials securely in a vault like Azure Key Vault, never in code, and use managed identities where possible.
- I also enable auditing, monitoring, and alerts for suspicious activity, keep the database patched, and maintain tested backups for disaster recovery.
Q) How do databases communicate in microservices?
In microservices, each service has its own database to ensure loose coupling. Services never share the same DB directly.
If one service needs data from another, it communicates through APIs or messaging, not by querying the other service’s DB.
For consistency, we use patterns like Saga or CQRS, and for async data sharing, we use events via message brokers like Azure Service Bus or Kafka.
1. Saga Pattern
Saga manages distributed transactions in microservices by breaking them into smaller local transactions. Each service updates its DB and triggers the next step via events. If one step fails, compensating transactions roll back previous changes. Can be implemented as Choreography (event-driven) or Orchestration (central controller).
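An orchestration-style saga can be sketched as a list of (action, compensation) pairs run by a central controller. This is an illustrative toy, not a production saga framework; the service names are invented for the example.

```python
def run_saga(steps):
    """Orchestration-style saga: `steps` is a list of (action, compensation)
    pairs. If any action fails, the compensations of all already-completed
    steps run in reverse order to roll the business transaction back."""
    completed = []
    for action, compensate in steps:
        try:
            action()
            completed.append(compensate)
        except Exception:
            for undo in reversed(completed):
                undo()  # compensating transaction undoes the local change
            return False
    return True

log = []

def reserve_inventory():
    raise RuntimeError("out of stock")  # this local transaction fails

steps = [
    (lambda: log.append("order created"),   lambda: log.append("order cancelled")),
    (lambda: log.append("payment charged"), lambda: log.append("payment refunded")),
    (reserve_inventory,                     lambda: log.append("inventory released")),
]
print(run_saga(steps))  # False
print(log)  # ['order created', 'payment charged', 'payment refunded', 'order cancelled']
```

In a choreography-style saga there is no central `run_saga` loop; instead each service listens for the previous service's event and publishes its own.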
2. CQRS (Command Query Responsibility Segregation)
CQRS separates write (commands) and read (queries) models. Commands update the data store, while queries read from a separate read model optimized for retrieval. This improves scalability, performance, and allows event sourcing.
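A minimal in-memory sketch of the split (illustrative only; the class and event names are invented): commands go through a write model that emits events, and a separate read model builds a denormalized view from those events.

```python
class SimpleBus:
    """Trivial event bus carrying events from the write side to the read side."""
    def __init__(self):
        self.handlers = []

    def publish(self, event):
        for handler in self.handlers:
            handler(event)

class OrderWriteModel:
    """Command side: accepts writes and publishes domain events."""
    def __init__(self, bus):
        self.orders = {}
        self.bus = bus

    def place_order(self, order_id, total):          # command
        self.orders[order_id] = total
        self.bus.publish({"type": "OrderPlaced", "id": order_id, "total": total})

class OrderReadModel:
    """Query side: a denormalized view kept up to date from events."""
    def __init__(self):
        self.summaries = {}

    def apply(self, event):
        if event["type"] == "OrderPlaced":
            self.summaries[event["id"]] = f"order {event['id']}: ${event['total']}"

    def get_summary(self, order_id):                 # query
        return self.summaries[order_id]

bus = SimpleBus()
read = OrderReadModel()
bus.handlers.append(read.apply)
write = OrderWriteModel(bus)
write.place_order(1, 99)
print(read.get_summary(1))  # order 1: $99
```

In a full CQRS system the two models usually live in separate stores (and the read side may be eventually consistent), but the responsibility split is the same.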
1. Sidecar Pattern
Sidecar Pattern runs helper components alongside the main service in the same deployment environment (like a pod in Kubernetes).
The sidecar adds capabilities like logging, monitoring, service discovery, or security without changing the main service code.
2. Aggregator Pattern
Aggregator Pattern collects data from multiple microservices, combines it, and sends a single unified response to the client. It reduces the number of client calls and hides service complexity.
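An aggregator can be sketched as one function that fans out to several services and merges the results. The three "service" functions below are invented stand-ins for real network calls.

```python
def get_user_profile(user_id):
    # stand-in for a call to the User service
    return {"name": f"user-{user_id}"}

def get_orders(user_id):
    # stand-in for a call to the Order service
    return [{"id": 1, "total": 50}]

def get_recommendations(user_id):
    # stand-in for a call to the Recommendation service
    return ["book", "laptop"]

def aggregate_dashboard(user_id):
    """Aggregator: the client makes one call; the aggregator fans out to
    three services and returns a single combined response."""
    return {
        "profile": get_user_profile(user_id),
        "orders": get_orders(user_id),
        "recommendations": get_recommendations(user_id),
    }

print(aggregate_dashboard(7))
```

In practice the fan-out calls would run concurrently (e.g., with async HTTP clients) so the aggregate latency is the slowest call, not the sum of all three.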
1. Delayed Queue (DQ)
A delayed queue holds messages for a set amount of time before making them available to consumers. It’s used for scenarios like scheduled tasks, retrying failed processes after a delay, or order cancellation grace periods. In Azure Service Bus, we can use `ScheduledEnqueueTimeUtc` to implement it; in RabbitMQ, we use a delayed exchange or TTL + dead-letter queue.
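The behavior can be sketched with a priority queue keyed by due time (an illustrative toy, not the Azure Service Bus or RabbitMQ API): a message enqueued with a delay stays invisible until its due time passes.

```python
import heapq
import time

class DelayedQueue:
    """Delayed-queue sketch: messages become visible to consumers only after
    their scheduled time, similar in effect to ScheduledEnqueueTimeUtc."""
    def __init__(self):
        self._heap = []   # (due_time, message), ordered by due time

    def enqueue(self, message, delay_seconds):
        heapq.heappush(self._heap, (time.monotonic() + delay_seconds, message))

    def dequeue(self):
        """Return the next due message, or None if nothing is ready yet."""
        if self._heap and self._heap[0][0] <= time.monotonic():
            return heapq.heappop(self._heap)[1]
        return None

q = DelayedQueue()
q.enqueue("cancel-order-42", delay_seconds=0.05)
print(q.dequeue())   # None (message not due yet)
time.sleep(0.06)
print(q.dequeue())   # cancel-order-42
```

A real broker does the waiting server-side, so consumers simply never see the message before its scheduled time.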
2. Dead Letter Queue (DLQ)
A Dead Letter Queue stores messages that cannot be delivered or processed after multiple retries. This prevents losing data and allows investigation. In Azure Service Bus, each queue/topic has a built-in DLQ, accessed via a sub-queue path `/$DeadLetterQueue`.
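The retry-then-dead-letter flow can be sketched as follows (an illustrative toy, not a broker API; `handler` and the "poison" message are invented for the example):

```python
def process_with_dlq(messages, handler, max_retries=3):
    """Try each message up to `max_retries` times; messages that still fail
    are moved to a dead-letter queue instead of being lost or retried forever."""
    dead_letter_queue = []
    for msg in messages:
        for attempt in range(max_retries):
            try:
                handler(msg)
                break                         # processed successfully
            except Exception:
                continue                      # retry
        else:
            dead_letter_queue.append(msg)     # retries exhausted: dead-letter it
    return dead_letter_queue

def handler(msg):
    if msg == "poison":
        raise ValueError("cannot process")

print(process_with_dlq(["ok", "poison", "also-ok"], handler))  # ['poison']
```

With a real broker the DLQ is a separate sub-queue that operators can inspect, fix, and replay from, rather than an in-memory list.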