How to efficiently randomly select a subset of data from an h5py dataset


























I have a very large dataset in h5py, and loading it in full causes memory problems during subsequent processing. I need to randomly select a subset and work with that instead. This is for "boosting" in a machine-learning context.



dataset = h5py.File(h5_file, 'r')

train_set_x_all = dataset['train_set_x'][:]  # loads the whole dataset into memory
train_set_y_all = dataset['train_set_y'][:]

dataset.close()

p = np.random.permutation(len(train_set_x_all))[:2000]  # randomly select 2000 rows
train_set_x = train_set_x_all[p]
train_set_y = train_set_y_all[p]


I still have to load the full set and then slice it with the index array p. This works for me, since subsequent training only uses the smaller set, but I wonder whether there is a better way that avoids keeping the full dataset in memory at all.
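h5py can read just the selected rows directly from disk, provided the integer index array is sorted in increasing order, so the full arrays never need to be materialized in memory. A minimal sketch under assumed shapes (demo.h5 and the tiny dataset here are stand-ins for the real h5_file; in the question n_select would be 2000):

```python
import numpy as np
import h5py

rng = np.random.default_rng(0)

# Build a small demo file standing in for the real h5_file.
with h5py.File("demo.h5", "w") as f:
    f.create_dataset("train_set_x", data=np.arange(100.0).reshape(20, 5))
    f.create_dataset("train_set_y", data=np.arange(20))

n_select = 4  # 2000 in the question
with h5py.File("demo.h5", "r") as f:
    n = f["train_set_x"].shape[0]
    # h5py fancy indexing requires indices in increasing order.
    p = np.sort(rng.permutation(n)[:n_select])
    train_set_x = f["train_set_x"][p, ...]  # only these rows are read from disk
    train_set_y = f["train_set_y"][p, ...]

print(train_set_x.shape)  # (4, 5)
```

As discussed in the comments, this kind of point selection can still be slow for large index sets, because it is scattered file access rather than one contiguous read.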


































  • arr = dataset['name'][:2000] loads a slice efficiently. arr = dataset['name'][p] also works, but is slower, and p has to be sorted. docs.h5py.org/en/latest/high/dataset.html#fancy-indexing
    – hpaulj
    Nov 18 '18 at 22:12










  • Depending on the selection range, it may be faster to load a slice (range) and pick randomly from that. Also, selection from an in-memory array isn't constrained by the sorted requirement. You may just have to try the alternatives and see which best suits your needs.
    – hpaulj
    Nov 18 '18 at 22:26












  • @hpaulj: The 1st option is not a random selection. The 2nd will error out since p isn't a boolean (I have already tried that). The linked page talks about using a boolean array as a fancy index, but that doesn't quite do what I want: it acts like a global mask and spits out a 1-D array.
    – kawingkelvin
    Nov 19 '18 at 0:17










  • I tried p = np.sort(p) and then train_set_x = dataset['train_set_x'][p, ...], and it works, but it is extremely slow; I would rather accept a bit more memory load than this dramatic slowdown. Is there something I am doing wrong? Random (fancy) indexing doesn't appear efficient at all.
    – kawingkelvin
    Nov 19 '18 at 0:33
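One way to keep the random sample global while avoiding h5py's slow per-row fancy read is to stream the dataset in large contiguous blocks and pick out the wanted rows of each block in memory. This is only a sketch, not code from the thread; sample_rows, demo3.h5, and the tiny shapes are hypothetical names, and a real block size would be in the thousands:

```python
import numpy as np
import h5py

def sample_rows(dset, indices, block=8):
    """Gather rows at `indices` using a few large sequential reads
    instead of one slow fancy-indexed read."""
    indices = np.sort(np.asarray(indices))
    out = np.empty((len(indices),) + dset.shape[1:], dtype=dset.dtype)
    pos = 0
    for start in range(0, dset.shape[0], block):
        stop = min(start + block, dset.shape[0])
        hit = indices[(indices >= start) & (indices < stop)]
        if hit.size:
            data = dset[start:stop]  # one contiguous read per block
            out[pos:pos + hit.size] = data[hit - start]
            pos += hit.size
    return out

# Demo with a small stand-in file.
rng = np.random.default_rng(2)
with h5py.File("demo3.h5", "w") as f:
    f.create_dataset("train_set_x", data=np.arange(150.0).reshape(30, 5))

with h5py.File("demo3.h5", "r") as f:
    p = rng.permutation(30)[:6]
    rows = sample_rows(f["train_set_x"], p)

print(rows.shape)  # (6, 5)
```

Peak memory here is one block plus the output, rather than the whole dataset, and all file access stays sequential.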










  • The documentation warns us that this sort of indexing is slow. With an array in memory, access to any point in the data buffer takes about the same time. But the h5 array is in a file, which has serial (or at least buffered) access, so selecting an item near the start of the dataset, another in the middle, and another near the end can require big jumps in file access. Requiring sorted indices at least eliminates back-and-forth seeks.
    – hpaulj
    Nov 19 '18 at 0:40


















Tags: python, numpy, h5py






asked Nov 18 '18 at 22:02 by kawingkelvin
edited Nov 18 '18 at 22:41 by hpaulj











