Strategies for time series forecasting for 2000 different products?














First of all, I realise that my question is very broad, which may make it hard to answer.

Do you have any advice on how to approach a problem where you need to make forecasts for 2000+ different products? In other words, each product requires its own forecast. I have two years of historical data at the week level (i.e. demand per week per product).

I need to do this in a short time frame: I have about a week, so I am looking for ways to quickly build relatively good prediction models. Creating a model for each product and inspecting its performance closely, one by one, would be too time-consuming.

I thought of segmenting the products based on their variance, so that I can use simple models for products with low variance. While this is probably not ideal, it would be a quick way to narrow down the number of models I need to create.

Any practical advice on approaching this problem would be greatly appreciated.










  • Are those similar products? You might benefit from searching this site for hierarchical forecasting. – kjetil b halvorsen, Jan 26 at 17:32
















time-series forecasting segmentation






asked Jan 26 at 16:12









Amonet









3 Answers






A follow-up to @StephanKolassa's answer:




  • I concur with Stephan that ets() from the forecast package in R is probably your best and fastest choice. If ETS doesn't give good results, you might also want to try Facebook's Prophet package (auto.arima is easy to use, but in my experience two years of weekly data is barely enough for an ARIMA model). Personally, I have found Prophet easier to use when you have promotion and holiday event data available; otherwise ets() might work better. Your real challenge is more of a coding challenge: how to efficiently iterate over a large number of time series. You can check this response for more details on how to automate forecast generation.


  • In demand forecasting, some form of hierarchical forecasting is frequently performed. That is, you have 2000 products and need a separate forecast for each, but there are similarities between products that might help with forecasting. You want to find some way of grouping the products along a product hierarchy and then use hierarchical forecasting to improve accuracy. Since you are looking for forecasts at the individual product level, try the top-down hierarchical approach.


  • Something a little more far-fetched, but I would like to call it out: Amazon and Uber use neural networks for this type of problem, where instead of fitting a separate model for each product/time series, they use one gigantic recurrent neural network to forecast all the time series in bulk. Note that they still end up with individual forecasts for each product (in Uber's case it is traffic/demand per city as opposed to products); they are just using a large model (an LSTM deep learning model) to do it all at once. The idea is similar in spirit to hierarchical forecasting, in the sense that the neural network learns from the similarities between the histories of different products to come up with better forecasts. The Uber team has made some of their code available (through the M4 competition GitHub repositories), however it is C++ code (not exactly the favorite language of the stats crowd). Amazon's approach is not open source, and you have to use their paid Amazon Forecast service to get the forecasts.
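On the "coding challenge" point: whatever model you plug in, the pattern is just one loop (or map) over all series. A minimal Python sketch, using hand-rolled simple exponential smoothing as a hypothetical stand-in for a real model such as ets() (the product names and numbers are made up):

```python
def ses_forecast(series, alpha=0.3):
    """Simple exponential smoothing; returns the one-step-ahead forecast."""
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

# Hypothetical demand histories: product id -> weekly demand.
demand = {
    "A": [10, 12, 11, 13, 12],
    "B": [100, 90, 95, 105, 110],
}

# One comprehension handles all products; 2000 series is no different,
# and the loop is trivially parallelisable if runtime becomes an issue.
forecasts = {pid: ses_forecast(y) for pid, y in demand.items()}
```

The same shape works in R with lapply() over a list of series, fitting and forecasting inside the function.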





With regards to your second comment: you need to differentiate between forecasting sales and forecasting demand. Demand is unconstrained: if an item is suddenly popular and your customers want 200 units, it doesn't matter that you only have 50 units on hand; your demand is still 200 units.



In practice it is very difficult to observe demand directly, so we use sales as a proxy for demand. This is problematic because it doesn't account for situations where a customer wanted to purchase a product but it was unavailable. To address this, along with the historical sales data, information about inventory levels and stock-outs is either included directly in a model or used to preprocess the time series before fitting a forecasting model.
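One crude way to preprocess censored weeks, sketched in Python (the numbers and the flag-and-impute rule are purely illustrative, not a recommended production method):

```python
# Hypothetical weekly sales: weeks where the product was out of stock
# record sales, not demand, so we treat them as censored and impute.
sales    = [50, 48, 20, 52, 49]
stockout = [False, False, True, False, False]  # week 3 ran out of stock

# Baseline demand estimated from uncensored weeks only.
uncensored = [s for s, out in zip(sales, stockout) if not out]
baseline = sum(uncensored) / len(uncensored)

# Replace censored weeks with at least the baseline (a crude demand proxy).
demand_est = [max(s, baseline) if out else s
              for s, out in zip(sales, stockout)]
```

Real systems do something far more careful (e.g. modelling the censoring explicitly), but the idea of repairing stock-out weeks before fitting is the same.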



Typically an unconstrained forecast is generated first by a forecast engine and then passed on to a planning system, which adds the constraints you mention (e.g. demand is 500 units but only 300 units are available) along with other constraints (safety stock, presentation stock, budgetary constraints, plans for promotions or introductions of new products, etc.). However, this falls under the general rubric of planning and inventory management, not forecasting per se.






  • @Amonet "I would make a forecast on product family level and then disaggregate to product level, correct?" Yes. – Skander H., Jan 26 at 23:13






  • +1, all extremely good points. Regarding hierarchical forecasting, I am a big fan of optimal reconciliation, which I and others have repeatedly found to outperform top-down and bottom-up at all levels of the hierarchy. Plus, it's at heart an optimization algorithm, so one can take constraints into account. (For instance, if some series have low volume, unconstrained reconciliation can lead to negative forecasts.) I agree, though, that one should aim at uncensored demand forecasts... – Stephan Kolassa, Jan 27 at 5:48






  • ... I would always recommend starting with simple forecasting methods first, which can be surprisingly hard to beat. See also here. – Stephan Kolassa, Jan 27 at 5:53






  • @usεr11852: two years is just two cycles. In seasonal differencing, we lose one cycle, so seasonal ARIMA loses half its data just through the differencing. I would not use seasonal ARIMA with less than five cycles' worth of data. ... – Stephan Kolassa, Jan 29 at 14:44






  • @usεr11852: this reminds me of some analyses I did where the bottom series were adjusted "too much", relatively speaking, because adjustments are more or less balanced in absolute terms, not in percentage terms. I then used mgcv::pcls() for the reconciliation, feeding the summation matrix in by hand. This had two advantages: (1) it allows you to set box constraints, e.g., to ensure reconciled forecasts are non-negative; (2) it allows you to weight the adjustments, so I used the inverse of each series' historical average as a weight, which addressed the adjustment problem. – Stephan Kolassa, Jan 30 at 16:53



















We will only be able to give you very general advice.




  • Are there any strong drivers, like promotions, calendar events, seasonality, trends or lifecycles? If so, include them in your models. For instance, you could regress sales on promotions, then model the residuals (using exponential smoothing or ARIMA).

  • There are software packages that do a reasonably good job of fitting multiple time series models to a series. You can then simply iterate over your 2000 series, which should not take much more runtime than a cup of coffee. I particularly recommend the ets() function in the forecast package in R. (Less so the auto.arima() function for weekly data.)

  • At least skim a forecasting textbook, e.g., this one. It uses the forecast package I recommend above.

  • What is your final objective? Do you want an unbiased forecast? Then assess point forecasts using the MSE. Will your bonus depend on the MAPE? Then this list of the problems of the MAPE may be helpful. Do you need forecasts to set safety amounts? Then you need quantile forecasts, not mean predictions. (The functions in the forecast package can give you those.)
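To illustrate the last point, that the choice of error measure matters, here is a small Python example with made-up numbers in which two forecasts rank differently under MSE and MAPE:

```python
def mse(actual, forecast):
    """Mean squared error: weights absolute errors, so large series dominate."""
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)

def mape(actual, forecast):
    """Mean absolute percentage error: weights relative errors."""
    return sum(abs(a - f) / a for a, f in zip(actual, forecast)) / len(actual)

actual = [10, 100]   # one low-volume and one high-volume observation
fc_a   = [11, 90]    # small relative errors on both
fc_b   = [5, 100]    # large relative error on the low-volume one

# fc_b wins under MSE (smaller squared errors overall), while fc_a wins
# under MAPE: pick the error measure that matches your actual objective.
```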


If you have more specific questions, do post them at CV.






  • @ŁukaszGrad: if you have worked your way through FPP2, our book won't tell you much new. Ord et al.'s Principles of Business Forecasting (2nd ed.) goes into more depth (I reviewed it here, if you have access). ... – Stephan Kolassa, Jan 26 at 16:47






  • ... You might profit from looking at the IIF, maybe reading its publication Foresight, or attending one of its conferences: either the ISF, which takes place this year in June in Thessaloniki, or the Foresight Practitioner Conference, this year in November at the SAS campus in Cary, NC, depending on where you are. The ISF is somewhat more academically oriented, but recently about 33% of attendees came from industry, and there is usually a practitioner track. – Stephan Kolassa, Jan 26 at 16:49






  • (Full disclosure: I am involved with all of these, so take my recommendations with a large grain of salt. If you do attend one of the conferences, find me and say hi!) – Stephan Kolassa, Jan 26 at 16:50






  • @SkanderH: use the forecast() command on your fitted model (i.e., the output of ets() or auto.arima()) and specify the level parameter. See ?forecast.ets and ?forecast.Arima (note the capitalization). – Stephan Kolassa, Jan 26 at 21:18






  • @StephanKolassa I accepted the other answer, as it's a follow-up on your answer, and people are therefore more inclined to also read your helpful advice. – Amonet, Jan 30 at 7:44



















Segmenting based on the variance of the original series makes no sense to me, as the best model should be invariant to scale. Consider a series: model it, then multiply each value in the series by 1000; the appropriate model structure does not change, even though the variance does.



In terms of mass-producing equations that may have deterministic structure (pulses, level shifts, local time trends) or autoregressive, seasonal, and ARIMA structure, you have to run a computer-based script. Beware of simple auto-ARIMA solutions that assume no deterministic structure, or that make fixed assumptions about it.
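The scale-invariance point can be made concrete with a quick Python sketch (hypothetical numbers): raw variance explodes under rescaling, while a scale-free alternative such as the coefficient of variation does not, so the latter is a more defensible segmentation criterion.

```python
series = [10, 12, 9, 11, 13]
scaled = [1000 * y for y in series]  # the same series in different units

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def coeff_of_variation(xs):
    """Standard deviation divided by the mean: unchanged by rescaling."""
    m = sum(xs) / len(xs)
    return variance(xs) ** 0.5 / m

# variance(scaled) is 1,000,000 times variance(series), so a variance
# threshold would put the two copies of the same series in different
# segments; their coefficients of variation are identical.
```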






share|cite|improve this answer









$endgroup$













    Your Answer





    StackExchange.ifUsing("editor", function () {
    return StackExchange.using("mathjaxEditing", function () {
    StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix) {
    StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
    });
    });
    }, "mathjax-editing");

    StackExchange.ready(function() {
    var channelOptions = {
    tags: "".split(" "),
    id: "65"
    };
    initTagRenderer("".split(" "), "".split(" "), channelOptions);

    StackExchange.using("externalEditor", function() {
    // Have to fire editor after snippets, if snippets enabled
    if (StackExchange.settings.snippets.snippetsEnabled) {
    StackExchange.using("snippets", function() {
    createEditor();
    });
    }
    else {
    createEditor();
    }
    });

    function createEditor() {
    StackExchange.prepareEditor({
    heartbeatType: 'answer',
    autoActivateHeartbeat: false,
    convertImagesToLinks: false,
    noModals: true,
    showLowRepImageUploadWarning: true,
    reputationToPostImages: null,
    bindNavPrevention: true,
    postfix: "",
    imageUploader: {
    brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
    contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
    allowUrls: true
    },
    onDemand: true,
    discardSelector: ".discard-answer"
    ,immediatelyShowMarkdownHelp:true
    });


    }
    });














    draft saved

    draft discarded


















    StackExchange.ready(
    function () {
    StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fstats.stackexchange.com%2fquestions%2f389291%2fstrategies-for-time-series-forecasting-for-2000-different-products%23new-answer', 'question_page');
    }
    );

    Post as a guest















    Required, but never shown

























    3 Answers
    3






    active

    oldest

    votes








    3 Answers
    3






    active

    oldest

    votes









    active

    oldest

    votes






    active

    oldest

    votes









    5












    $begingroup$

    A follow up to @StephanKolassa 's answer:




    • I concur with Stephan that ETS() from the forecast package in R is probably your best and fastest choice. If ETS doesn't give good results, you might want also want to use Facebook's Prophet package (Auto.arima is easy to use, but two years of weekly data is bordering not enough data for an ARIMA model in my experience). Personally I have found Prophet to be easier to user when you have promotions and holiday event data available, otherwise ETS() might work better. Your real challenge is more of a coding challenge of how to efficiently iterate it over a large number of time series. You can check this response for more details on how to automate forecast generation.


    • In demand forecasting, some form of hierarchical forecasting is frequently performed. I.e you have 2000 products and you need a separate forecast for each separate product, but there are similarities between products that might help with the forecasting. You want to find some way of grouping the product together along a product hierarchy and then use hierarchical forecasting to improve accuracy. Since you are looking for forecasts at the individual product level, look at trying the top-down hierarchical approach.


    • Something a little bit more farfetched, but I would like call it out: Amazon and Uber use neural networks for this type of problem, where instead of having a separate forecast for each product/time series, they use one gigantic recurrent neural network to forecast all the time series in bulk. Note that they still end up with individual forecasts for each product (in Uber's case it is traffic/demand per city as opposed to products), they are just using a large model (an LSTM deep learning model) to do it all at once. The idea is similar in spirit to hierarchical forecasting in the sense that the neural network learns from the similarities between the histories of different products to come up with better forecasts. The Uber team has made some of their code available (through the M4 competition Github repositories), however it is C++ code (not exactly the favorite language of the stats crowd). Amazon's approach is not open source and you have to use their paid Amazon Forecast service to do the forecasts.





    With regards to your second comment: You need to differentiate between forecasting sales and forecasting demand. Demand is unconstrained, if suddenly an item is popular and your customers want 200 units, it doesn't matter that you have only 50 units on hand, your demand is still going to be 200 units.



    In practice it is very difficult to observe demand directly, so we use sales as proxy for demand. This has a problem because it doesn't account for situations where a customer wanted to purchase a product but it was unavailable. To address it, along with the historical sales data, information about inventory levels and stock outs is either directly included in a model or used to preprocess the time series prior to generating a model for forecasting.



    Typically an unconstrained forecast is generated first by a forecast engine and then passed on to a planning system which then adds the constrains you mention (i.e demand is 500 units but only 300 units are available) along with other constraints (safety stock, presentation stock, budgetary constraints, plans for promotions or introductions of new products etc...) - however this falls under the general rubric of planning and inventory management, not forecasting per se.






    share|cite|improve this answer











    $endgroup$









    • 1




      $begingroup$
      @Amonet "I would make a forecast on product family level and then disaggregrate to product level, correct?" Yes.
      $endgroup$
      – Skander H.
      Jan 26 at 23:13






    • 1




      $begingroup$
      +1, all extremely good points. Regarding hierarchical forecasting, I am a big fan of optimal reconciliation, which I and others have repeatedly found to outperform top-down and bottom-up on all levels of the hierarchy. Plus, it's at heart an optimization algorithm, so one can take constraints into account. (For instance, if some series have low volume, the unconstrained reconciliation can lead to negative forecasts.) I agree, though, that one should aim at uncensored demand forecasts...
      $endgroup$
      – Stephan Kolassa
      Jan 27 at 5:48






    • 2




      $begingroup$
      ... I would always recommend to start with simple forecasting methods first, which can be surprisingly hard to beat. See also here.
      $endgroup$
      – Stephan Kolassa
      Jan 27 at 5:53






    • 1




      $begingroup$
      @usεr11852: two years are just two cycles. In seasonal differencing, we lose one cycle. So seasonal ARIMA loses half its data just through the differencing. I would not use seasonal ARIMA with less than five cycles' worth of data. ...
      $endgroup$
      – Stephan Kolassa
      Jan 29 at 14:44






    • 1




      $begingroup$
      @usεr11852: this reminds me of some analyses I did where the bottom series were adjusted "too much", relatively speaking, because adjustments are more-or-less balanced in absolute terms, not in percentage terms. I then used mgcv::pcls() for the reconciliation, feeding the summation matrix in by hand. This had two advantages: (1) it allows you to set box constraints, e.g., to ensure reconciliated forecasts are non-negative, (2) it allows you to weight the adjustments, so I just used the inverse of each series' historical average as a weight, which addressed the adjustment problem.
      $endgroup$
      – Stephan Kolassa
      Jan 30 at 16:53
















    5












    $begingroup$

    A follow up to @StephanKolassa 's answer:




    • I concur with Stephan that ETS() from the forecast package in R is probably your best and fastest choice. If ETS doesn't give good results, you might want also want to use Facebook's Prophet package (Auto.arima is easy to use, but two years of weekly data is bordering not enough data for an ARIMA model in my experience). Personally I have found Prophet to be easier to user when you have promotions and holiday event data available, otherwise ETS() might work better. Your real challenge is more of a coding challenge of how to efficiently iterate it over a large number of time series. You can check this response for more details on how to automate forecast generation.


    • In demand forecasting, some form of hierarchical forecasting is frequently performed. I.e you have 2000 products and you need a separate forecast for each separate product, but there are similarities between products that might help with the forecasting. You want to find some way of grouping the product together along a product hierarchy and then use hierarchical forecasting to improve accuracy. Since you are looking for forecasts at the individual product level, look at trying the top-down hierarchical approach.


    • Something a little bit more farfetched, but I would like call it out: Amazon and Uber use neural networks for this type of problem, where instead of having a separate forecast for each product/time series, they use one gigantic recurrent neural network to forecast all the time series in bulk. Note that they still end up with individual forecasts for each product (in Uber's case it is traffic/demand per city as opposed to products), they are just using a large model (an LSTM deep learning model) to do it all at once. The idea is similar in spirit to hierarchical forecasting in the sense that the neural network learns from the similarities between the histories of different products to come up with better forecasts. The Uber team has made some of their code available (through the M4 competition Github repositories), however it is C++ code (not exactly the favorite language of the stats crowd). Amazon's approach is not open source and you have to use their paid Amazon Forecast service to do the forecasts.





    With regards to your second comment: You need to differentiate between forecasting sales and forecasting demand. Demand is unconstrained, if suddenly an item is popular and your customers want 200 units, it doesn't matter that you have only 50 units on hand, your demand is still going to be 200 units.



    In practice it is very difficult to observe demand directly, so we use sales as proxy for demand. This has a problem because it doesn't account for situations where a customer wanted to purchase a product but it was unavailable. To address it, along with the historical sales data, information about inventory levels and stock outs is either directly included in a model or used to preprocess the time series prior to generating a model for forecasting.



    Typically an unconstrained forecast is generated first by a forecast engine and then passed on to a planning system which then adds the constrains you mention (i.e demand is 500 units but only 300 units are available) along with other constraints (safety stock, presentation stock, budgetary constraints, plans for promotions or introductions of new products etc...) - however this falls under the general rubric of planning and inventory management, not forecasting per se.






    share|cite|improve this answer











    $endgroup$









    • 1




      $begingroup$
      @Amonet "I would make a forecast on product family level and then disaggregrate to product level, correct?" Yes.
      $endgroup$
      – Skander H.
      Jan 26 at 23:13






    • 1




      $begingroup$
      +1, all extremely good points. Regarding hierarchical forecasting, I am a big fan of optimal reconciliation, which I and others have repeatedly found to outperform top-down and bottom-up on all levels of the hierarchy. Plus, it's at heart an optimization algorithm, so one can take constraints into account. (For instance, if some series have low volume, the unconstrained reconciliation can lead to negative forecasts.) I agree, though, that one should aim at uncensored demand forecasts...
      $endgroup$
      – Stephan Kolassa
      Jan 27 at 5:48






    • 2




      $begingroup$
      ... I would always recommend to start with simple forecasting methods first, which can be surprisingly hard to beat. See also here.
      $endgroup$
      – Stephan Kolassa
      Jan 27 at 5:53






    • 1




      $begingroup$
      @usεr11852: two years are just two cycles. In seasonal differencing, we lose one cycle. So seasonal ARIMA loses half its data just through the differencing. I would not use seasonal ARIMA with less than five cycles' worth of data. ...
      $endgroup$
      – Stephan Kolassa
      Jan 29 at 14:44






    • 1




      $begingroup$
      @usεr11852: this reminds me of some analyses I did where the bottom series were adjusted "too much", relatively speaking, because adjustments are more-or-less balanced in absolute terms, not in percentage terms. I then used mgcv::pcls() for the reconciliation, feeding the summation matrix in by hand. This had two advantages: (1) it allows you to set box constraints, e.g., to ensure reconciliated forecasts are non-negative, (2) it allows you to weight the adjustments, so I just used the inverse of each series' historical average as a weight, which addressed the adjustment problem.
      $endgroup$
      – Stephan Kolassa
      Jan 30 at 16:53














    5












    5








    5





    $begingroup$

    A follow up to @StephanKolassa 's answer:




    • I concur with Stephan that ETS() from the forecast package in R is probably your best and fastest choice. If ETS doesn't give good results, you might want also want to use Facebook's Prophet package (Auto.arima is easy to use, but two years of weekly data is bordering not enough data for an ARIMA model in my experience). Personally I have found Prophet to be easier to user when you have promotions and holiday event data available, otherwise ETS() might work better. Your real challenge is more of a coding challenge of how to efficiently iterate it over a large number of time series. You can check this response for more details on how to automate forecast generation.


    A follow-up to @StephanKolassa's answer:




    • I concur with Stephan that ETS() from the forecast package in R is probably your best and fastest choice. If ETS doesn't give good results, you might also want to try Facebook's Prophet package (auto.arima() is easy to use, but two years of weekly data borders on not enough data for an ARIMA model, in my experience). Personally, I have found Prophet easier to use when you have promotion and holiday event data available; otherwise ETS() might work better. Your real challenge is more of a coding challenge: how to efficiently iterate over a large number of time series. You can check this response for more details on how to automate forecast generation.


    • In demand forecasting, some form of hierarchical forecasting is frequently performed. That is, you have 2000 products and you need a separate forecast for each product, but there are similarities between products that might help with the forecasting. You want to find some way of grouping the products together along a product hierarchy and then use hierarchical forecasting to improve accuracy. Since you are looking for forecasts at the individual product level, look at trying the top-down hierarchical approach.


    • Something a little more far-fetched, but I would like to call it out: Amazon and Uber use neural networks for this type of problem, where instead of having a separate forecast for each product/time series, they use one gigantic recurrent neural network to forecast all the time series in bulk. Note that they still end up with individual forecasts for each product (in Uber's case it is traffic/demand per city, as opposed to products); they are just using a large model (an LSTM deep learning model) to do it all at once. The idea is similar in spirit to hierarchical forecasting, in the sense that the neural network learns from the similarities between the histories of different products to come up with better forecasts. The Uber team has made some of their code available (through the M4 competition GitHub repositories), but it is C++ code (not exactly the favorite language of the stats crowd). Amazon's approach is not open source, and you have to use their paid Amazon Forecast service to generate the forecasts.
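    The "iterate over a large number of time series" point from the first bullet can be sketched in a few lines. This is a minimal pure-Python illustration only (the answer itself recommends R's ets()); the smoothing constant and demo data below are made up:

```python
# Sketch: batch-forecasting many product series with simple exponential
# smoothing (SES). A real pipeline would fit a full ETS model per series
# (e.g. via R's forecast::ets()); this loop only shows the iteration pattern.

def ses_forecast(series, alpha=0.3):
    """One-step-ahead SES forecast (alpha is a hypothetical smoothing
    constant, not a tuned value)."""
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

# Demo data: product -> weekly demand history (104 weeks in practice).
histories = {
    "product_a": [10, 12, 11, 13, 12],
    "product_b": [100, 90, 95, 105, 98],
}

forecasts = {sku: ses_forecast(h) for sku, h in histories.items()}
print(forecasts)
```

    With 2000 series this loop is embarrassingly parallel, so it can be spread over several cores if runtime ever becomes an issue.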





    With regards to your second comment: you need to differentiate between forecasting sales and forecasting demand. Demand is unconstrained: if an item is suddenly popular and your customers want 200 units, it doesn't matter that you have only 50 units on hand; your demand is still 200 units.



    In practice it is very difficult to observe demand directly, so we use sales as a proxy for demand. This is problematic because it doesn't account for situations where a customer wanted to purchase a product but it was unavailable. To address this, along with the historical sales data, information about inventory levels and stockouts is either included directly in the model or used to preprocess the time series before a forecasting model is fit.
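    One simple form of the preprocessing mentioned above can be sketched as follows. Treating out-of-stock weeks as censored and imputing the in-stock mean is only one crude option among many; the flag and data below are invented:

```python
# Sketch: preprocessing a sales series before forecasting so that stockout
# weeks do not drag the demand estimate down. Weeks flagged out-of-stock
# are treated as censored and replaced by the mean of in-stock weeks.

def uncensor(sales, in_stock):
    """Replace censored (out-of-stock) weeks with the in-stock mean."""
    observed = [s for s, ok in zip(sales, in_stock) if ok]
    mean = sum(observed) / len(observed)
    return [s if ok else mean for s, ok in zip(sales, in_stock)]

sales    = [50, 48, 0, 52, 51]              # week 3: zero units sold...
in_stock = [True, True, False, True, True]  # ...because the shelf was empty

print(uncensor(sales, in_stock))  # week 3 imputed with the mean of the rest
```

    More careful treatments would model the censoring explicitly, but even this kind of cleanup usually beats feeding raw stockout zeros into the forecaster.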



    Typically an unconstrained forecast is generated first by a forecast engine and then passed on to a planning system, which adds the constraints you mention (e.g., demand is 500 units but only 300 units are available) along with other constraints (safety stock, presentation stock, budgetary constraints, plans for promotions or introductions of new products, etc.). However, this falls under the general rubric of planning and inventory management, not forecasting per se.
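    The forecast-then-constrain hand-off can be sketched like this; the function and numbers are purely illustrative, and real planning systems handle many more constraints:

```python
# Sketch: the forecast engine produces an unconstrained demand forecast;
# the planning step then caps it by what can actually be supplied
# (on-hand plus inbound, minus units reserved as safety stock).

def constrained_plan(demand_forecast, on_hand, inbound, safety_stock=0):
    """Units plannable for sale given simple supply constraints."""
    available = max(on_hand + inbound - safety_stock, 0)
    return min(demand_forecast, available)

print(constrained_plan(demand_forecast=500, on_hand=200, inbound=100))
# demand is 500, but only 300 units can be supplied
```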







    edited Jan 27 at 6:31

























    answered Jan 26 at 21:38









    Skander H.









    • 1




      $begingroup$
      @Amonet "I would make a forecast on product family level and then disaggregate to product level, correct?" Yes.
      $endgroup$
      – Skander H.
      Jan 26 at 23:13






    • 1




      $begingroup$
      +1, all extremely good points. Regarding hierarchical forecasting, I am a big fan of optimal reconciliation, which I and others have repeatedly found to outperform top-down and bottom-up on all levels of the hierarchy. Plus, it's at heart an optimization algorithm, so one can take constraints into account. (For instance, if some series have low volume, the unconstrained reconciliation can lead to negative forecasts.) I agree, though, that one should aim at uncensored demand forecasts...
      $endgroup$
      – Stephan Kolassa
      Jan 27 at 5:48






    • 2




      $begingroup$
      ... I would always recommend to start with simple forecasting methods first, which can be surprisingly hard to beat. See also here.
      $endgroup$
      – Stephan Kolassa
      Jan 27 at 5:53






    • 1




      $begingroup$
      @usεr11852: two years are just two cycles. In seasonal differencing, we lose one cycle. So seasonal ARIMA loses half its data just through the differencing. I would not use seasonal ARIMA with less than five cycles' worth of data. ...
      $endgroup$
      – Stephan Kolassa
      Jan 29 at 14:44






    • 1




      $begingroup$
      @usεr11852: this reminds me of some analyses I did where the bottom series were adjusted "too much", relatively speaking, because adjustments are more-or-less balanced in absolute terms, not in percentage terms. I then used mgcv::pcls() for the reconciliation, feeding the summation matrix in by hand. This had two advantages: (1) it allows you to set box constraints, e.g., to ensure reconciliated forecasts are non-negative, (2) it allows you to weight the adjustments, so I just used the inverse of each series' historical average as a weight, which addressed the adjustment problem.
      $endgroup$
      – Stephan Kolassa
      Jan 30 at 16:53
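    The top-down approach discussed in this answer and its comments (forecast at the product-family level, then disaggregate to products) reduces, in its simplest proportion-based form, to something like the following sketch with made-up numbers:

```python
# Sketch: top-down hierarchical forecasting. Forecast the family total,
# then split it across products using each product's historical share of
# family demand. Real implementations (e.g. R's hts package) also support
# bottom-up and optimal-reconciliation methods.

family_forecast = 1000  # next week's forecast for the whole product family

history = {"sku_1": 600, "sku_2": 300, "sku_3": 100}  # historical totals
total = sum(history.values())

product_forecasts = {
    sku: family_forecast * units / total for sku, units in history.items()
}
print(product_forecasts)  # shares 0.6 / 0.3 / 0.1 of the family forecast
```

    Note the disaggregated forecasts sum back to the family forecast by construction, which is exactly the coherence property hierarchical methods enforce.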

















    6












    $begingroup$

    We will only be able to give you very general advice.




    • Are there any strong drivers, like promotions or calendar events, or seasonality, trends or lifecycles? If so, include them in your models. For instance, you could regress sales on promotions, then potentially model residuals (using exponential smoothing or ARIMA).

    • There are software packages that do a reasonably good job of automatically fitting time series models to a series. You can then simply iterate over your 2000 series, which should not take much more runtime than a cup of coffee. I particularly recommend the ets() function in the forecast package in R. (Less so the auto.arima() function for weekly data.)

    • At least skim a forecasting textbook, e.g., this one. It uses the forecast package I recommend above.

    • What is your final objective? Do you want an unbiased forecast? Then assess point forecasts using the MSE. Will your bonus depend on the MAPE? Then this list of the problems of the MAPE may be helpful. Do you need forecasts to set safety stocks? Then you need quantile forecasts, not mean predictions. (The functions in the forecast package can give you those.)
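    The last bullet's distinction can be made concrete with a small sketch: MSE and MAPE for point-forecast assessment, and an empirical quantile when the goal is safety stock. All numbers below are illustrative:

```python
# Sketch: two point-forecast accuracy metrics, plus an empirical quantile
# for service-level planning (where a mean prediction is not what you want).

def mse(actual, forecast):
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)

def mape(actual, forecast):
    return sum(abs(a - f) / a for a, f in zip(actual, forecast)) / len(actual)

def quantile(xs, q):
    """Crude empirical quantile: value at index floor(q * n)."""
    xs = sorted(xs)
    return xs[min(int(q * len(xs)), len(xs) - 1)]

actual   = [100, 120, 80, 110]
forecast = [105, 115, 90, 100]
print(mse(actual, forecast), mape(actual, forecast))

# For a 95% service level, plan to the 95th percentile of demand draws:
demand_draws = [80, 90, 100, 100, 110, 120, 130, 140, 150, 200]
print(quantile(demand_draws, 0.95))
```

    Note how the single large draw dominates the 95th percentile: quantile targets are driven by the tail of the demand distribution, not its center.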


    If you have more specific questions, do post them at CV.















    $endgroup$









    • 4




      $begingroup$
      @ŁukaszGrad: if you have worked your way through FPP2, our book won't tell you much new. Ord et al.'s Principles of Business Forecasting (2nd ed.) goes into more depth (I reviewed it here if you have access). ...
      $endgroup$
      – Stephan Kolassa
      Jan 26 at 16:47






    • 3




      $begingroup$
      ... You might profit from looking at the IIF, maybe read its publication Foresight or attend one of its conferences, either the ISF, which will take place this year in June in Thessaloniki, or the Foresight Practitioner Conference, this year in November at the SAS campus in Cary, NC, depending on where you are. The ISF is somewhat more academically oriented, but recently, I'd say about 33% of attendees came from industry, and there usually is a practitioner track.
      $endgroup$
      – Stephan Kolassa
      Jan 26 at 16:49






    • 2




      $begingroup$
      (Full disclosure: I am involved with all of these, so take my recommendations with a large grain of salt. If you do attend one of the conferences, find me and say hi!)
      $endgroup$
      – Stephan Kolassa
      Jan 26 at 16:50






    • 1




      $begingroup$
      @SkanderH: use the forecast() command on your fitted model (i.e., the output of ets() or auto.arima()), and specify the level parameter. See ?forecast.ets and ?forecast.Arima (note the capitalization).
      $endgroup$
      – Stephan Kolassa
      Jan 26 at 21:18






    • 1




      $begingroup$
      @StephanKolassa I accepted the other answer, as it's a follow-up on your answer and people are therefore more inclined to read your helpful advice also.
      $endgroup$
      – Amonet
      Jan 30 at 7:44
















    answered Jan 26 at 16:27









    Stephan Kolassa










    0












    $begingroup$

        Segmenting based on the variance of the original series makes no sense to me, as the best model should be invariant to scale. Consider a series: model it, then multiply each value in the time series by 1000; the form of the best model does not change.



        In terms of mass-producing equations that may have deterministic structure (pulses, level shifts, local time trends) and/or stochastic structure (autoregressive seasonality, ARIMA terms), you have to run a computer-based script. Beware of simple auto-ARIMA solutions that assume no deterministic structure, or that make fixed assumptions about it.
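    The scale-invariance point can be demonstrated with a toy model: fitting the same simple trend model to a series and to the series multiplied by 1000 changes only the coefficient, not the model form. A pure-Python OLS sketch with invented data:

```python
# Sketch: least-squares slope of y_t on t (through the origin) fitted to a
# series and to the same series scaled by 1000. The slope scales by exactly
# 1000, i.e. the same model fits both; variance alone says nothing about
# which model a series needs.

def ols_slope(ys):
    """Intercept-free least-squares slope of y_t on t, t = 1..n."""
    ts = range(1, len(ys) + 1)
    return sum(t * y for t, y in zip(ts, ys)) / sum(t * t for t in ts)

series = [2.0, 4.1, 5.9, 8.2, 10.0]
scaled = [1000 * y for y in series]

print(ols_slope(series))  # roughly 2
print(ols_slope(scaled))  # roughly 2000: same model, scaled coefficient
```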















    $endgroup$



















        answered Jan 26 at 16:23









        IrishStat


























