Is there a way to collect a map using “groupingBy” for MULTIPLE elements within a nested structure?





First, a bit of context code:



import java.util.*;
import java.util.concurrent.atomic.DoubleAdder;
import java.util.function.Function;
import java.util.stream.Collectors;

class Scratch {

    enum Id {A, B, C}

    static class IdWrapper {
        private final Id id;
        public IdWrapper(Id id) { this.id = id; }
        Id getId() { return id; }
    }

    public static void main(String[] args) {
        // each inner "property map" describes one disk: its parents and its size
        Map<String, Object> v1 = new HashMap<>();
        v1.put("parents", new HashSet<>(Arrays.asList(new IdWrapper(Id.A), new IdWrapper(Id.B))));
        v1.put("size", 1d);

        Map<String, Object> v2 = new HashMap<>();
        v2.put("parents", new HashSet<>(Arrays.asList(new IdWrapper(Id.B), new IdWrapper(Id.C))));
        v2.put("size", 2d);

        // outer map: disk name -> property map
        Map<String, Map<String, Object>> allVs = new HashMap<>();
        allVs.put("v1", v1);
        allVs.put("v2", v2);


The above represents the data structure I am dealing with. I have an outer map (the key type is irrelevant) that contains inner "property maps" as values. These inner maps use strings to look up different kinds of data.



In the case I am working on, each v1, v2,... represents a "disk". Each disk has a specific size, but can have multiple parents.



Now I need to sum up the sizes per parent Id as Map<Id, Double>.
For the above example, that map would be {B=3.0, A=1.0, C=2.0}.



The following code gives the expected result:



HashMap<Id, DoubleAdder> adders = new HashMap<>();
allVs.values().forEach(m -> {
    double size = (Double) m.get("size");
    Set<IdWrapper> wrappedIds = (Set<IdWrapper>) m.get("parents");
    wrappedIds.forEach(w -> adders.computeIfAbsent(w.getId(), a -> new DoubleAdder()).add(size));
});

System.out.println(adders.keySet().stream()
        .collect(Collectors.toMap(Function.identity(), key -> adders.get(key).doubleValue())));


But the code feels pretty clunky (like the fact that I need a second map for adding up the sizes).



I have a similar case, where there is always exactly one parent, and that can easily be solved using



collect(Collectors.groupingBy(..., Collectors.summingDouble(...)));
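
For illustration, a minimal sketch of what that single-parent variant could look like, assuming a hypothetical property key "parent" holding a single IdWrapper (that key is not part of the example above):

// Hypothetical single-parent case: each property map is assumed to store one
// IdWrapper under the made-up key "parent" next to its "size".
Map<Id, Double> sizePerParent = allVs.values().stream()
        .collect(Collectors.groupingBy(
                m -> ((IdWrapper) m.get("parent")).getId(),
                Collectors.summingDouble(m -> (Double) m.get("size"))));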


But I am lost for the "multiple" parents case.



So, question: can the above transformation to compute the required Map<Id, Double> be rewritten using groupingBy()?



And just for the record: the above is just an MCVE for the problem I need an answer for. I understand that the "data layout" might look strange. In reality, we actually have distinct classes representing these "disks", for example. But our "framework" also allows for accessing the properties of any object within the database using such IDs and property names. And sometimes, when you have performance issues, fetching data in such a "raw property map" way is orders of magnitude faster than accessing the true "disk" objects themselves. In other words: I can't change anything about the context. My question is solely about rewriting that computation.



(I am constrained to Java 8 and "standard" Java libraries, but additional answers for newer versions of Java or nice non-standard ways of solving this will be appreciated, too.)










Tags: java, java-8, java-stream, grouping






asked Nov 22 '18 at 8:39 by GhostCat, edited Nov 22 '18 at 9:31 by Holger













  • "In the case I am working on, each v1, v2,... represents a 'disk'": then create a Disk class with two typed properties instead of using a Map.

    – JB Nizet
    Nov 22 '18 at 8:43











  • @JBNizet See my updates. The above is just an example; in reality, things are more complicated. And in reality, there is a good reason to use the above map approach (50 ms for a full lookup compared to 3 seconds ... and that is a small configuration).

    – GhostCat
    Nov 22 '18 at 9:07



















1 Answer

Here's a single stream pipeline solution:



// SimpleEntry is java.util.AbstractMap.SimpleEntry (needs that import or full qualification)
Map<Id, Double> sums = allVs.values()
        .stream()
        .flatMap(m -> ((Set<IdWrapper>) m.get("parents")).stream()
                .map(i -> new SimpleEntry<Id, Double>(i.getId(), (Double) m.get("size"))))
        .collect(Collectors.groupingBy(Map.Entry::getKey,
                Collectors.summingDouble(Map.Entry::getValue)));


Output:



{B=3.0, A=1.0, C=2.0}


The idea is to convert each inner Map to a Stream of entries where the key is an Id (of the "parents" Set) and the value is the corresponding "size".



Then it's easy to group the Stream into the desired output.
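
A minimal sketch of an equivalent formulation (not from the original answer, and assuming the same Scratch/IdWrapper types as in the question): the flattened entries can also be collected with Collectors.toMap, using Double::sum as the merge function instead of groupingBy plus summingDouble.

// Sketch: same flatMap, but collected via toMap with a merge function
// (uses java.util.AbstractMap.SimpleEntry like the answer's code).
Map<Id, Double> sums = allVs.values().stream()
        .flatMap(m -> ((Set<IdWrapper>) m.get("parents")).stream()
                .map(i -> new SimpleEntry<Id, Double>(i.getId(), (Double) m.get("size"))))
        .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue, Double::sum));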






answered Nov 22 '18 at 8:53 by Eran

  • I had that idea as well, but couldn't see how to get there!

    – GhostCat
    Nov 22 '18 at 9:01

  • For larger data sets, it might be worth doing the (Double)m.get ("size") only once before streaming over (Set<IdWrapper>)m.get ("parents") instead of repeating it for every element.

    – Holger
    Nov 22 '18 at 9:21

  • @Holger that seems like a small optimization, given these inner Maps only have 2 keys. If the actual Maps are larger, it might be helpful, though not much (since get has constant expected time anyway).

    – Eran
    Nov 22 '18 at 9:27

  • The inner objects are Sets, not Maps, but yes, I already said “for larger data sets”. The operation has constant time, still hashing is not necessarily cheap (the type cast is not so much a problem) and it’s multiplied with the number of set elements.

    – Holger
    Nov 22 '18 at 12:30

  • @Holger I was referring to the inner Map<String, Object>, not to the Set<IdWrapper>s within those Maps.

    – Eran
    Nov 22 '18 at 15:08
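
Following up on Holger's suggestion above, a minimal sketch of that variant (again assuming the Scratch/IdWrapper types from the question): the "size" is looked up once per inner map, and only the parents set is streamed.

// Sketch of the hoisted-size variant suggested in the comments
// (uses java.util.AbstractMap.SimpleEntry like the answer's code).
Map<Id, Double> sums = allVs.values().stream()
        .flatMap(m -> {
            Double size = (Double) m.get("size");   // looked up once per disk
            return ((Set<IdWrapper>) m.get("parents")).stream()
                    .map(w -> new SimpleEntry<Id, Double>(w.getId(), size));
        })
        .collect(Collectors.groupingBy(Map.Entry::getKey,
                Collectors.summingDouble(Map.Entry::getValue)));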













