Microservices using Service Fabric: where to place the controllers?
I have a microservices project with multiple services in .NET Core. When it comes to placing the controllers, there are two approaches:
- Place the controllers in the respective microservices, with a Startup.cs in each microservice.
- Place all controllers in a separate project and have them call the individual services.
I think the first approach involves less coding effort, but the second one separates the controllers from the actual services using interfaces, etc.
Is there a difference in how the services are created and managed in Service Fabric between the two approaches?
microservices azure-service-fabric asp.net-core-webapi
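To make the first option concrete, here is a minimal sketch of a stateless service hosting its own ASP.NET Core pipeline (assuming the Microsoft.ServiceFabric.AspNetCore.Kestrel package; OrdersService and the "ServiceEndpoint" name are placeholders):

```csharp
using System.Collections.Generic;
using System.Fabric;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.ServiceFabric.Services.Communication.AspNetCore;
using Microsoft.ServiceFabric.Services.Communication.Runtime;
using Microsoft.ServiceFabric.Services.Runtime;

// Each microservice's own ASP.NET Core pipeline, configured by its own Startup.
public class Startup
{
    public void ConfigureServices(IServiceCollection services) => services.AddMvc();
    public void Configure(IApplicationBuilder app) => app.UseMvc();
}

// One stateless service = one self-hosted web API with its own controllers.
// Registered from Program.Main via:
//   ServiceRuntime.RegisterServiceAsync("OrdersServiceType", ctx => new OrdersService(ctx));
internal sealed class OrdersService : StatelessService
{
    public OrdersService(StatelessServiceContext context) : base(context) { }

    protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
    {
        // "ServiceEndpoint" must match an endpoint declared in ServiceManifest.xml.
        yield return new ServiceInstanceListener(serviceContext =>
            new KestrelCommunicationListener(serviceContext, "ServiceEndpoint",
                (url, listener) => new WebHostBuilder()
                    .UseKestrel()
                    .UseStartup<Startup>()
                    .UseUrls(url)
                    .Build()));
    }
}
```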
Can you provide more details about both options and how they are intended to be used? Currently it sounds like a general programming question rather than a question related to Service Fabric.
– Oleg Karasik
Nov 13 at 10:22
asked Nov 13 at 8:22 by bomaboom, edited Nov 13 at 10:59 by Oleg Karasik
2 Answers
This is a very broad topic and open to discussion, because it all depends on preferences, experience, and tech stack. I'll add my two cents, but don't take them as a rule; this is just my view of both approaches.
First approach (an API in each service, isolated from the others):
- The services expose their public APIs themselves, and you need to put a service-discovery mechanism in place so that clients can call each microservice; a simple one is using the reverse proxy to forward calls by service name.
- Each service and its API scale independently.
- It is easier to ship individual updates without taking down the other microservices.
- It tends to produce more code repetition for authorization, authentication, and other cross-cutting concerns, so you will likely end up building shared libraries used by all services.
- It increases the number of points of failure, which is good in the sense that each failure affects fewer services: if one API is failing, the other services are not impacted (as long as the failure does not take down the whole machine, e.g. a memory leak or high CPU usage).
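The reverse-proxy discovery mentioned above can be sketched as follows. The Service Fabric reverse proxy listens on port 19081 by default and routes http://localhost:19081/&lt;ApplicationName&gt;/&lt;ServiceName&gt;/&lt;path&gt; to the named service (the application and service names below are made up):

```csharp
using System;

// Helper that builds the Service Fabric reverse-proxy address for a service.
static class ReverseProxy
{
    public static Uri UriFor(string application, string service, string path) =>
        new Uri($"http://localhost:19081/{application}/{service}/{path.TrimStart('/')}");
}

// Usage: the client only needs the service's name, not its current address.
//   var response = await new System.Net.Http.HttpClient()
//       .GetAsync(ReverseProxy.UriFor("MyApp", "OrdersService", "api/orders/42"));
```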
Second approach (a single API that forwards calls to the right services):
- You have a single endpoint, service discovery happens inside the API, and the actual work is handled by each service.
- The API must scale for all traffic, even if one service consumes far more resources than the others; only the services behind it scale independently.
- To add or modify API endpoints you will likely have to update both the API and the service, and taking the API down affects every service.
- It reduces code duplication, and you can centralize many cross-cutting concerns such as authorization, request throttling, and so on.
- It has fewer points of failure, but they hit harder: if a microservice that receives a good share of the calls goes down, the API accumulates open connections and pending requests, which degrades the performance of the other services; and if the API itself goes down, every service becomes unavailable. The first approach, by comparison, offloads that resilience to the proxy or to the client.
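Under this second approach, a gateway controller is mostly forwarding. A rough sketch (the reverse-proxy address, the service names, and the "fabric" HttpClient registration in Startup are all assumptions): cross-cutting concerns such as [Authorize] are applied once, here, instead of in every microservice.

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

// All controllers live in the single API project; authorization, throttling,
// etc. are centralized here rather than repeated in each microservice.
[ApiController]
[Route("api/orders")]
[Authorize]
public class OrdersController : ControllerBase
{
    private readonly HttpClient http;

    // Assumes a named client "fabric" registered via services.AddHttpClient(...).
    public OrdersController(IHttpClientFactory factory) =>
        http = factory.CreateClient("fabric");

    [HttpGet("{id}")]
    public async Task<IActionResult> Get(string id)
    {
        // Discovery happens in the API: forward to the named backend service.
        var response = await http.GetAsync(
            $"http://localhost:19081/MyApp/OrdersService/api/orders/{id}");
        return StatusCode((int)response.StatusCode,
            await response.Content.ReadAsStringAsync());
    }
}
```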
In summary, both approaches take a similar amount of effort; the difference is where that effort is spent. Evaluate both and consider which one you would rather maintain. Don't compare just the code, because code has very little impact on the overall solution next to aspects like release, monitoring, logging, security, and performance.
answered Nov 13 at 10:31 (edited Nov 13 at 14:38) by Diego Mendes
In our current project we have a public-facing API and several individual microservice projects, one per domain. Keeping them individual lets us scale each microservice according to the resources it uses; for example, we have an imaging service that consumes a lot of resources, and scaling just that service is easy. You also get to deploy them individually, and if any one service fails it doesn't break the whole application.
In front of all the microservices we have an API Gateway that handles authentication, throttling, versioning, health checks, metrics, logging, etc. We have interfaces for each microservice and keep the request and response models separate for each context. There is no business logic in this layer, and you also get the chance to aggregate responses when several services need to be called.
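The aggregation piece can be sketched roughly like this (the interfaces, models, and service names below are hypothetical, not our actual code):

```csharp
using System.Threading.Tasks;

// Hypothetical per-microservice interfaces kept in the gateway project;
// request/response models stay separate for each bounded context.
public class ProductInfo { public string Name { get; set; } }
public class ImageInfo { public string Url { get; set; } }

public interface ICatalogService { Task<ProductInfo> GetProductAsync(string id); }
public interface IImagingService { Task<ImageInfo> GetImageAsync(string id); }

// The gateway only aggregates responses; no business logic lives here.
public class ProductPageAggregator
{
    private readonly ICatalogService catalog;
    private readonly IImagingService imaging;

    public ProductPageAggregator(ICatalogService catalog, IImagingService imaging)
    {
        this.catalog = catalog;
        this.imaging = imaging;
    }

    public async Task<(ProductInfo Product, ImageInfo Image)> GetProductPageAsync(string id)
    {
        // Call both microservices concurrently and combine their responses.
        var productTask = catalog.GetProductAsync(id);
        var imageTask = imaging.GetImageAsync(id);
        await Task.WhenAll(productTask, imageTask);
        return (productTask.Result, imageTask.Result);
    }
}
```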
Feel free to ask anything about this structure.
answered Nov 20 at 18:57 by Tarik Tutuncu
If I understand correctly, you have the controllers/API and the interfaces in the same project for each microservice, right?
– bomaboom
Nov 22 at 9:41
We have a separate project for each microservice. The interfaces for these microservices are in our Gateway project.
– Tarik Tutuncu
Nov 22 at 11:33