Microservices using Service Fabric: where to place controllers

I have a microservice project with multiple services in .NET Core. When it comes to placing the controllers, there are two approaches:

  1. Place the controllers in their respective microservices, with a Startup.cs in each microservice.

  2. Place all controllers in a separate project and have them call the individual services.

I think the first approach involves less coding effort, but the second separates the controllers from the actual services through interfaces etc. (a minimal sketch of the first approach is shown below).
Is there a difference in how the services are created and managed in Service Fabric between the two approaches?
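
Roughly, per service, the first option would look something like this; each microservice hosts its own ASP.NET Core pipeline and its controllers live in the same project (OrdersController and its route are purely illustrative placeholders, not part of the actual project):

    // Hedged sketch of option 1: controllers and Startup.cs inside the microservice itself.
    // In Service Fabric, each such service would host this pipeline behind its own Kestrel listener.
    using Microsoft.AspNetCore.Builder;
    using Microsoft.AspNetCore.Mvc;
    using Microsoft.Extensions.DependencyInjection;

    public class Startup
    {
        // Registers MVC for this one service only.
        public void ConfigureServices(IServiceCollection services) => services.AddMvc();

        // Each service owns its own HTTP pipeline end to end.
        public void Configure(IApplicationBuilder app) => app.UseMvc();
    }

    [Route("api/[controller]")]
    public class OrdersController : Controller
    {
        // GET api/orders/42
        [HttpGet("{id}")]
        public IActionResult Get(int id) => Ok(new { id, status = "Pending" });
    }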










microservices azure-service-fabric asp.net-core-webapi

asked Nov 13 at 8:22 by bomaboom, edited Nov 13 at 10:59 by Oleg Karasik

  • Can you provide more details about both options and how they are intended to be used? Currently it sounds like a general programming question rather than a question related to Service Fabric.
    – Oleg Karasik
    Nov 13 at 10:22

2 Answers






This is a very broad topic and open to discussion, because it all depends on preferences, experience and tech stack. I will add my two cents, but don't take it as a rule; it is just my view of both approaches.



First approach (each service exposes its own API, isolated from the others):

  • The services expose their public APIs themselves, so you need a service-discovery mechanism in place for clients to reach each microservice. A simple option is the Service Fabric reverse proxy, which forwards calls based on the service name (see the sketch after this list).

  • Each service and its API scale independently.

  • It is easier to deploy individual updates without taking down other microservices.

  • There tends to be more code repetition for authorization, authentication, and other cross-cutting concerns, so you will probably end up building shared libraries used by all services.

  • It increases the number of failure points, but in a good way: each failure affects fewer services. If one API is failing, the other services are not impacted (as long as the failure does not affect the whole machine, e.g. a memory leak or high CPU usage).
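
As a rough illustration of the reverse-proxy option from the first bullet: assuming the built-in Service Fabric reverse proxy is enabled on its default port 19081, a client can reach each per-service API by name (the application name MyApp, the service name OrdersService and the API path are placeholders):

    // Hedged sketch: calling one of the per-service APIs through the Service Fabric
    // reverse proxy, which resolves the service by name. URL format:
    //   http://<cluster>:19081/<ApplicationName>/<ServiceName>/<path inside the service>
    using System.Net.Http;
    using System.Threading.Tasks;

    public static class ReverseProxyClient
    {
        private static readonly HttpClient Http = new HttpClient();

        public static Task<string> GetOrderAsync(int id) =>
            Http.GetStringAsync($"http://localhost:19081/MyApp/OrdersService/api/orders/{id}");
    }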


Second approach (a single API that forwards the calls to the right services):

  • You have a single endpoint, service discovery happens inside the API, and the actual work is still handled by each individual service.

  • The API must scale for all consumers, even when a single service uses far more resources than the others; only the backend services scale independently.

  • To add or modify API endpoints you will likely have to update both the API and the service, and taking the API down affects all services.

  • It reduces code duplication, because you can centralize many cross-cutting concerns such as authorization, request throttling and so on (a sketch of this follows the list).

  • It has fewer failure points, but they are more critical. If one microservice goes down and a large share of calls depend on it, the API has to hold more connections and pending requests, which degrades the other services and overall performance; if the API itself goes down, every service becomes unavailable. The first approach, by contrast, offloads that resilience concern to the proxy or to the client.
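
A minimal sketch of what centralizing those cross-cutting concerns in the single front-end API could look like, assuming JWT bearer authentication (the authority URL is a placeholder; a throttling middleware would be registered the same way):

    // Hedged sketch: authentication is configured once in the front-end API's Startup
    // instead of in every microservice behind it.
    using Microsoft.AspNetCore.Authentication.JwtBearer;
    using Microsoft.AspNetCore.Builder;
    using Microsoft.Extensions.DependencyInjection;

    public class GatewayStartup
    {
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
                    .AddJwtBearer(options => options.Authority = "https://login.example.com"); // placeholder
            services.AddMvc();
        }

        public void Configure(IApplicationBuilder app)
        {
            app.UseAuthentication(); // runs once, before any call is forwarded to a backend service
            app.UseMvc();            // the controllers here forward the calls to the right services
        }
    }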


In summary, both approaches require a similar amount of effort; the difference is where that effort goes. Evaluate both and consider which one you would rather maintain. Don't compare only the code, because code has very little impact on the overall solution compared with other aspects such as release, monitoring, logging, security and performance.






answered Nov 13 at 10:31 by Diego Mendes, edited Nov 13 at 14:38

In our current project we have a public-facing API and several individual microservice projects, one per domain. Keeping them separate lets us scale each one according to the resources it uses; for example, we have an imaging service that consumes a lot of resources, so scaling it on its own is easier. You can also deploy the services individually, and if one of them fails it doesn't break the whole application.



In front of all the microservices we have an API gateway that handles authentication, throttling, versioning, health checks, metrics, logging, etc. We have an interface for each microservice and keep the request and response models separate for each context. There is no business logic in this layer, and it also gives you the chance to aggregate responses when several services need to be called (a rough sketch of one such gateway endpoint follows).
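
As a sketch of what one such gateway endpoint could look like, assuming Service Fabric remoting behind the gateway (IImagingService, the fabric:/ URI and the route are hypothetical names, not taken from our actual project):

    // Hedged sketch: a gateway controller that only translates the HTTP request
    // into a call on a backend microservice; no business logic lives here.
    using System;
    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Mvc;
    using Microsoft.ServiceFabric.Services.Remoting;
    using Microsoft.ServiceFabric.Services.Remoting.Client;

    // One such interface per microservice is kept in the gateway project.
    public interface IImagingService : IService
    {
        Task<byte[]> GetThumbnailAsync(string imageId);
    }

    [Route("api/images")]
    public class ImagesController : Controller
    {
        [HttpGet("{id}/thumbnail")]
        public async Task<IActionResult> GetThumbnail(string id)
        {
            // Service Fabric resolves the current address of the named service.
            IImagingService imaging = ServiceProxy.Create<IImagingService>(
                new Uri("fabric:/MyApp/ImagingService"));

            byte[] bytes = await imaging.GetThumbnailAsync(id);
            return File(bytes, "image/jpeg");
        }
    }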



If you have any questions about this structure, please feel free to ask.






answered Nov 20 at 18:57 by Tarik Tutuncu

  • If I understand correctly, you have the controllers/API and the interfaces in the same project for each microservice, right?
    – bomaboom
    Nov 22 at 9:41










  • We have a separate project for each microservice. The interfaces for these microservices are in our gateway project.
    – Tarik Tutuncu
    Nov 22 at 11:33










