How to decompress and read a file containing multiple compressed files in Spark
I have a file AA.zip which in turn contains multiple compressed files, for example aa.tar.gz, bb.tar.gz, etc.
I need to read these files in Spark with Scala. How can I achieve that?
The only problem here is extracting the contents of the zip file.
scala apache-spark bigdata
Possible duplicate of Read whole text files from a compression in Spark
– user10465355
Nov 20 '18 at 11:35
No, that question is about a directory containing compressed files; here I have a single file in zip format which in turn contains files in .tar.gz format.
– sheetal kaur
Nov 20 '18 at 11:37
asked Nov 20 '18 at 8:46, edited Nov 20 '18 at 11:40
sheetal kaur
1 Answer
ZIPs on HDFS are going to be a bit tricky, because the zip format doesn't split well, so you'll have to process one or more whole zip files per executor. This is also one of the few cases where you probably have to fall back to SparkContext, because binary file support in Spark is otherwise not that good.
https://spark.apache.org/docs/2.4.0/api/scala/index.html#org.apache.spark.SparkContext
There is a binaryFiles method there which gives you access to each zip file's binary data, which you can then process with the usual ZIP handling from Java or Scala.
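The ZIP-handling step can be sketched without a Spark cluster. The sketch below (object and entry names are illustrative) builds an in-memory AA.zip whose entries are gzip blobs, then walks it with java.util.zip, which is exactly what you would do per file inside a Spark job. In a real job the zip bytes would come from sc.binaryFiles("AA.zip") as a PortableDataStream, and unpacking the inner tars would additionally need a tar parser such as Apache Commons Compress's TarArchiveInputStream, since java.util.zip handles gzip but not tar.

```scala
import java.io.{ByteArrayInputStream, ByteArrayOutputStream, InputStream}
import java.util.zip.{GZIPInputStream, GZIPOutputStream, ZipEntry, ZipInputStream, ZipOutputStream}

object ZipOfGzDemo {

  // Drain any InputStream into a byte array.
  def readAll(in: InputStream): Array[Byte] = {
    val buf = new Array[Byte](4096)
    val bos = new ByteArrayOutputStream()
    var n = in.read(buf)
    while (n != -1) { bos.write(buf, 0, n); n = in.read(buf) }
    bos.toByteArray
  }

  // Gzip a byte array (stand-in for the aa.tar.gz members; real tars would
  // additionally need a tar parser such as commons-compress).
  def gzip(data: Array[Byte]): Array[Byte] = {
    val bos = new ByteArrayOutputStream()
    val gz = new GZIPOutputStream(bos)
    gz.write(data)
    gz.close()
    bos.toByteArray
  }

  // Build an in-memory zip with the given entries, mimicking AA.zip.
  def buildZip(entries: Map[String, Array[Byte]]): Array[Byte] = {
    val bos = new ByteArrayOutputStream()
    val zos = new ZipOutputStream(bos)
    for ((name, bytes) <- entries) {
      zos.putNextEntry(new ZipEntry(name))
      zos.write(bytes)
      zos.closeEntry()
    }
    zos.close()
    bos.toByteArray
  }

  // Given the raw zip bytes (in Spark these would come from
  // sc.binaryFiles(...) as a PortableDataStream), walk the entries
  // and gunzip each one into text.
  def readZipOfGz(zipBytes: Array[Byte]): Map[String, String] = {
    val zis = new ZipInputStream(new ByteArrayInputStream(zipBytes))
    val out = scala.collection.mutable.Map[String, String]()
    var entry = zis.getNextEntry
    while (entry != null) {
      // read(...) returns -1 at the end of the current entry, so readAll
      // consumes exactly this entry's bytes.
      val entryBytes = readAll(zis)
      val text = new String(
        readAll(new GZIPInputStream(new ByteArrayInputStream(entryBytes))), "UTF-8")
      out(entry.getName) = text
      entry = zis.getNextEntry
    }
    out.toMap
  }

  def main(args: Array[String]): Unit = {
    val zip = buildZip(Map(
      "aa.gz" -> gzip("hello from aa".getBytes("UTF-8")),
      "bb.gz" -> gzip("hello from bb".getBytes("UTF-8"))
    ))
    val contents = readZipOfGz(zip)
    println(contents("aa.gz"))
    println(contents("bb.gz"))
  }
}
```

In a Spark job, `readZipOfGz` would be applied inside a `flatMap` over `sc.binaryFiles("hdfs:///path/AA.zip")`, calling `toArray()` on each `PortableDataStream` to get the zip bytes on the executor.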
answered Nov 20 '18 at 9:12
Dominic Egger