XGBoost on Spark crashes with SIGSEGV




















I use Scala in Azure Databricks with the following setup:




  • 5x worker node (28.0 GB Memory, 8 Cores, 1.5 DBU)

  • 1x driver (14.0 GB Memory, 4 Cores, 0.75 DBU)


I have a Spark DataFrame with 760k rows and two columns:




  1. label (Double)

  2. features (SparseVector of length 84224 each)
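For context, here is a minimal sketch of how a DataFrame with this shape could be built (the column values are illustrative, assuming Spark ML's `Vectors.sparse` and a `spark` session as in a Databricks notebook):

```scala
import org.apache.spark.ml.linalg.Vectors

// Illustrative rows only: a Double label plus a sparse feature
// vector of length 84224, matching the schema described above.
val df = spark.createDataFrame(Seq(
  (1.5, Vectors.sparse(84224, Array(0, 17, 84000), Array(1.0, 0.5, 2.0))),
  (0.2, Vectors.sparse(84224, Array(3, 42), Array(1.0, 1.0)))
)).toDF("label", "features")
```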


I want to train a regression model with XGBoost on this DataFrame:



val params = Map(
  "objective" -> "reg:linear",
  "max_depth" -> 6,
  "eval_metric" -> "rmse"
)

val model = new XGBoostRegressor(params)
  .setFeaturesCol("features")
  .setLabelCol("label")
  .setTreeMethod("approx")
  .setNumRound(20)
  .setNumEarlyStoppingRounds(3)
  .setUseExternalMemory(true)
  .setMaxDepth(6)
  .setNumWorkers(10)

val trainedModel = model.fit(trainSample)


After launching, I get the following error:




SIGSEGV (0xb) at pc=0x00007f62a9d33e0e, pid=3954,
tid=0x00007f62c88db700




What I've tried so far:



When I set numWorkers to 1, training starts, but it is obviously very slow, which I believe is not the way it is meant to be used.



Neither the documentation here: https://xgboost.readthedocs.io/en/latest/tutorials/external_memory.html nor here: https://docs.databricks.com/spark/latest/mllib/third-party-libraries.html#xgboost helps with my case.



My questions are:




  1. Is it possible to run XGBoost on a dataset that is bigger than the memory of each individual worker? (I assume the answer is yes, but correct me if I'm wrong.)

  2. How do I use external memory properly, so that XGBoost can still train when I use an even bigger dataset?

  3. Does the partitioning of the input DataFrame affect training in any way?
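Regarding question 3, one thing I could try (not verified to fix the crash) is repartitioning the input so its partition count matches numWorkers, since XGBoost4J-Spark launches one training task per worker and may reshuffle the data itself otherwise:

```scala
// Hypothetical workaround sketch: explicitly align Spark partitions
// with numWorkers (10 here) and cache before training, to avoid an
// implicit repartition/shuffle inside XGBoost4J-Spark.
val repartitioned = trainSample.repartition(10).cache()
val trainedModel = model.fit(repartitioned)
```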










  • Do you have access to the console of the driver? – EmiCareOfCell44, Nov 22 '18 at 11:09













  • I am able to access stdout, stderr and log4j output of the driver – Marcin Zablocki, Nov 22 '18 at 13:36


















scala apache-spark xgboost databricks azure-databricks






asked Nov 22 '18 at 11:01









Marcin Zablocki
























