derivation of return type based on max range of input possible in C++
I was recently asked this question in a C++ interview: improve the piece of
code below, which fails when the sum of two ints only fits in a long and the
return type therefore needs to be derived accordingly.



The code below fails because the decltype() based deduction works only on the static types of the operands, not on the actual range of the input values, so the deduced return type is the same as T. Hence we perhaps need some template metaprogramming technique to derive the return type as long when T is int.



How can this be generalized? Any hints or clues?



I feel that decltype() won't be helpful here.



#include <iostream>
#include <climits>

using namespace std;

template<typename T> auto adder(const T& i1, const T& i2) -> decltype(i1 + i2)
{
    return i1 + i2;
}

int main(int argc, char* argv[])
{
    cout << adder(INT_MAX-10, INT_MAX-3) << endl;       // wrong: overflows
    cout << adder<long>(INT_MAX-10, INT_MAX-3) << endl; // correct!
    return 0;
}
  • For a class, you can use user-defined deduction guides; but for a function, only explicit instantiation can specify the type you need, otherwise the compiler will deduce the template argument.
    – rsy56640
    Nov 18 '18 at 2:04
c++11 templates template-meta-programming integer-overflow template-deduction
edited Nov 18 '18 at 3:51 by max66

asked Nov 18 '18 at 1:46 by Anand Kulkarni
1 Answer
Hence we need perhaps some metaprogramming template technique to derive the return type as long if T is int.




Not so simple.



If T is int, you can't be sure that long is enough.



The standard only says that



1) the number of bits for int (sizeof(int) * CHAR_BIT) is at least 16



2) the number of bits for long (sizeof(long) * CHAR_BIT) is at least 32



3) sizeof(int) <= sizeof(long)



So a compiler where sizeof(int) == sizeof(long) is perfectly legal, and

adder<long>(INT_MAX-10, INT_MAX-3);

doesn't work there, because long may not be wide enough to hold (without overflow) the sum of two ints.



I don't see a simple and elegant solution.



The best that comes to mind is based on the fact that C++11 introduced the following types:



1) std::int_least8_t, smallest integer type with at least 8 bits



2) std::int_least16_t, smallest integer type with at least 16 bits



3) std::int_least32_t, smallest integer type with at least 32 bits



4) std::int_least64_t, smallest integer type with at least 64 bits



C++11 also introduced std::intmax_t as the maximum-width integer type.



So I propose the following template type selector



template <std::size_t N, typename = std::true_type>
struct typeFor;

/* in case std::intmax_t is bigger than 64 bits */
template <std::size_t N>
struct typeFor<N, std::integral_constant<bool,
    (N > 64u) && (N <= sizeof(std::intmax_t)*CHAR_BIT)>>
{ using type = std::intmax_t; };

template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N > 32u) && (N <= 64u)>>
{ using type = std::int_least64_t; };

template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N > 16u) && (N <= 32u)>>
{ using type = std::int_least32_t; };

template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N > 8u) && (N <= 16u)>>
{ using type = std::int_least16_t; };

template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N <= 8u)>>
{ using type = std::int_least8_t; };


that, given a number of bits, defines the corresponding smallest "at least" integer type.



I also propose the following using declaration



template <typename T>
using typeNext = typename typeFor<1u+sizeof(T)*CHAR_BIT>::type;


that, given a type T, selects the smallest integer type that can surely contain the sum of two T values (an integer with at least one more bit than T).



So your adder() simply becomes



template<typename T>
typeNext<T> adder (T const & i1, T const & i2)
{ return {typeNext<T>{i1} + i2}; }


Observe that the returned value isn't simply



   return i1 + i2;


otherwise you'd return the correct type but the wrong value: i1 + i2 is computed as a T value, so it can overflow before the sum is converted to typeNext<T>.



To avoid this problem, you first create a typeNext<T> temporary from one of the two values (typeNext<T>{i1}), then add the other (typeNext<T>{i1} + i2), obtaining a typeNext<T> value, and finally return that value. This way the sum is computed as a typeNext<T> sum and there is no overflow.



The following is a complete compilable example



#include <cstdint>
#include <climits>
#include <iostream>
#include <type_traits>

template <std::size_t N, typename = std::true_type>
struct typeFor;

/* in case std::intmax_t is bigger than 64 bits */
template <std::size_t N>
struct typeFor<N, std::integral_constant<bool,
    (N > 64u) && (N <= sizeof(std::intmax_t)*CHAR_BIT)>>
{ using type = std::intmax_t; };

template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N > 32u) && (N <= 64u)>>
{ using type = std::int_least64_t; };

template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N > 16u) && (N <= 32u)>>
{ using type = std::int_least32_t; };

template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N > 8u) && (N <= 16u)>>
{ using type = std::int_least16_t; };

template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N <= 8u)>>
{ using type = std::int_least8_t; };

template <typename T>
using typeNext = typename typeFor<1u + sizeof(T)*CHAR_BIT>::type;

template <typename T>
typeNext<T> adder (T const & i1, T const & i2)
{ return {typeNext<T>{i1} + i2}; }

int main()
{
    auto x = adder(INT_MAX-10, INT_MAX-3);

    std::cout << "int:  " << sizeof(int)*CHAR_BIT << std::endl;
    std::cout << "long: " << sizeof(long)*CHAR_BIT << std::endl;
    std::cout << "x:    " << sizeof(x)*CHAR_BIT << std::endl;

    std::cout << std::is_same<long, decltype(x)>::value << std::endl;
}


On my 64-bit Linux platform, I get 32 bits for int, 64 bits for long and for x, and long and decltype(x) are indeed the same type.



But this is only true for my platform; nothing guarantees that long and decltype(x) are always the same.



Observe also that trying to get a type for the sum of two std::intmax_t's



std::intmax_t y {};
auto z = adder(y, y);


gives an error and doesn't compile, because no typeFor specialization is defined for an N bigger than sizeof(std::intmax_t)*CHAR_BIT.



  • Thank you max66, I will take a look and try to understand the approach. As you say, perhaps it isn't black and white, and there are more intricacies to it than simply int to long.
    – Anand Kulkarni
    Nov 18 '18 at 4:23












  • @DeeptiKulkarni - Sorry: I've forgotten an important point. Answer modified: see the adder() description.
    – max66
    Nov 18 '18 at 10:37










  • I checked with the interviewer later, on a lighter note over chat, and he explained that the above is a good enough solution. All they needed was to cover int to long and double to long double. :)
    – Anand Kulkarni
    Nov 20 '18 at 11:50











Your Answer






StackExchange.ifUsing("editor", function () {
StackExchange.using("externalEditor", function () {
StackExchange.using("snippets", function () {
StackExchange.snippets.init();
});
});
}, "code-snippets");

StackExchange.ready(function() {
var channelOptions = {
tags: "".split(" "),
id: "1"
};
initTagRenderer("".split(" "), "".split(" "), channelOptions);

StackExchange.using("externalEditor", function() {
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled) {
StackExchange.using("snippets", function() {
createEditor();
});
}
else {
createEditor();
}
});

function createEditor() {
StackExchange.prepareEditor({
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: true,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: 10,
bindNavPrevention: true,
postfix: "",
imageUploader: {
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
},
onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
});


}
});














draft saved

draft discarded


















StackExchange.ready(
function () {
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fstackoverflow.com%2fquestions%2f53357212%2fderivation-of-return-type-based-on-max-range-of-input-possible-in-c%23new-answer', 'question_page');
}
);

Post as a guest















Required, but never shown

























1 Answer
1






active

oldest

votes








1 Answer
1






active

oldest

votes









active

oldest

votes






active

oldest

votes









3















Hence we need perhaps some metaprogramming template technique to derive the return type as long if T is int.




Not so simple.



If T is int, you're non sure that long is enough.



The standard say only that



1) the number of bits for int (sizeof(int) * CHAR_BIT) is at least 16



2) the number of bits for long (sizeof(long) * CHAR_BIT) is at least 32



3) sizeof(int) <= sizeof(long)



So if a compiler manage a int with sizeof(int) == sizeof(long), this is perfectly legal and



adder<long>(INT_MAX-10, INT_MAX-3);


doesn't works because long can be not enough to contain (without overflow) the sum between two int's.



I don't see a simple and elegant solution.



The best that come in my mind is based on the fact that C++11 introduced the following types



1) std::int_least8_t, smallest integer type with at least 8 bits



2) std::int_least16_t, smallest integer type with at least 16 bits



3) std::int_least32_t, smallest integer type with at least 32 bits



4) std::int_least64_t, smallest integer type with at least 64 bits



C++11 also introduce std::intmax_t as the maximum width integer type.



So I propose the following template type selector



template <std::size_t N, typename = std::true_type>
struct typeFor;

/* in case std::intmax_t is bigger than 64 bits */
template <std::size_t N>
struct typeFor<N, std::integral_constant<bool,
(N > 64u) && (N <= sizeof(std::intmax_t)*CHAR_BIT)>>
{ using type = std::intmax_t; };

template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N > 32u) && (N <= 64u)>>
{ using type = std::int_least64_t; };

template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N > 16u) && (N <= 32u)>>
{ using type = std::int_least32_t; };

template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N > 8u) && (N <= 16u)>>
{ using type = std::int_least16_t; };

template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N <= 8u)>>
{ using type = std::int_least8_t; };


that, given a number of bits, define the corresponding smallest "at least" integer type.



I propose also the following using



template <typename T>
using typeNext = typename typeFor<1u+sizeof(T)*CHAR_BIT>::type;


that, given a type T, detect the smallest integer type that surely contain a sum between two T values (a integer with a number of bits that is at least the number of bits of T plus one).



So your adder() simply become



template<typename T>
typeNext<T> adder (T const & i1, T const & i2)
{ return {typeNext<T>{i1} + i2}; }


Observe that th returned value isn't simply



   return i1 + i2;


otherwise you return the correct type but with the wrong value: i1 + i2 is calculated as a T value so you can have overflow, then the sum is assigned to a typeNext<T> variable.



To avoid this problem, you have to initialize a typeNext<T> temporary variable with one of two values (typeNext<T>{i1}), then add the other (typeNext<T>{i1} + i2) obtaining a typeNext<T> value, finally return the computed value. This way the sum in calculated as a typeNext<T> sum and you doesn't have overflow.



The following is a full compiling example



#include <cstdint>
#include <climits>
#include <iostream>
#include <type_traits>

template <std::size_t N, typename = std::true_type>
struct typeFor;

/* in case std::intmax_t is bigger than 64 bits */
template <std::size_t N>
struct typeFor<N, std::integral_constant<bool,
(N > 64u) && (N <= sizeof(std::intmax_t)*CHAR_BIT)>>
{ using type = std::intmax_t; };

template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N > 32u) && (N <= 64u)>>
{ using type = std::int_least64_t; };

template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N > 16u) && (N <= 32u)>>
{ using type = std::int_least32_t; };

template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N > 8u) && (N <= 16u)>>
{ using type = std::int_least16_t; };

template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N <= 8u)>>
{ using type = std::int_least8_t; };

template <typename T>
using typeNext = typename typeFor<1u+sizeof(T)*CHAR_BIT>::type;

template<typename T>
typeNext<T> adder (T const & i1, T const & i2)
{ return {typeNext<T>{i1} + i2}; }

int main()
{
auto x = adder(INT_MAX-10, INT_MAX-3);

std::cout << "int: " << sizeof(int)*CHAR_BIT << std::endl;
std::cout << "long: " << sizeof(long)*CHAR_BIT << std::endl;
std::cout << "x: " << sizeof(x)*CHAR_BIT << std::endl;

std::cout << std::is_same<long, decltype(x)>::value << std::endl;
}


In my Linux 64bit platform, i get 32bit for int, 64bit for long and for x and also that long and decltype(x) are the same type.



But this is true for my platform; nothing guaranties that long and decltype(x) are ever the same.



Observe also that trying to get a type for the sum of two std::intmax_t's



 std::intmax_t  y {};

auto z = adder(y, y);


gives an error and doesn't compile because isn't defined a typeFor for a N bigger that sizeof(std::intmax_t)*CHAR_BIT.






share|improve this answer























  • Thank you max66, I will take a look and try to understand the approach. As you say it perhaps its black and white and there are more intercacies to it than simply int->long
    – Anand Kulkarni
    Nov 18 '18 at 4:23












  • @DeeptiKulkarni - Sorry: I've forgotten an important point. Answer modified: see the adder() description.
    – max66
    Nov 18 '18 at 10:37










  • I checked with interviewer later on a lighter note of chat and he explained that the above is good enough solution. All they needed was to cover int to long, double to long double. :)
    – Anand Kulkarni
    Nov 20 '18 at 11:50
















3















Hence we need perhaps some metaprogramming template technique to derive the return type as long if T is int.




Not so simple.



If T is int, you're non sure that long is enough.



The standard say only that



1) the number of bits for int (sizeof(int) * CHAR_BIT) is at least 16



2) the number of bits for long (sizeof(long) * CHAR_BIT) is at least 32



3) sizeof(int) <= sizeof(long)



So if a compiler manage a int with sizeof(int) == sizeof(long), this is perfectly legal and



adder<long>(INT_MAX-10, INT_MAX-3);


doesn't works because long can be not enough to contain (without overflow) the sum between two int's.



I don't see a simple and elegant solution.



The best that come in my mind is based on the fact that C++11 introduced the following types



1) std::int_least8_t, smallest integer type with at least 8 bits



2) std::int_least16_t, smallest integer type with at least 16 bits



3) std::int_least32_t, smallest integer type with at least 32 bits



4) std::int_least64_t, smallest integer type with at least 64 bits



C++11 also introduce std::intmax_t as the maximum width integer type.



So I propose the following template type selector



template <std::size_t N, typename = std::true_type>
struct typeFor;

/* in case std::intmax_t is bigger than 64 bits */
template <std::size_t N>
struct typeFor<N, std::integral_constant<bool,
(N > 64u) && (N <= sizeof(std::intmax_t)*CHAR_BIT)>>
{ using type = std::intmax_t; };

template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N > 32u) && (N <= 64u)>>
{ using type = std::int_least64_t; };

template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N > 16u) && (N <= 32u)>>
{ using type = std::int_least32_t; };

template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N > 8u) && (N <= 16u)>>
{ using type = std::int_least16_t; };

template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N <= 8u)>>
{ using type = std::int_least8_t; };


that, given a number of bits, define the corresponding smallest "at least" integer type.



I propose also the following using



template <typename T>
using typeNext = typename typeFor<1u+sizeof(T)*CHAR_BIT>::type;


that, given a type T, detect the smallest integer type that surely contain a sum between two T values (a integer with a number of bits that is at least the number of bits of T plus one).



So your adder() simply become



template<typename T>
typeNext<T> adder (T const & i1, T const & i2)
{ return {typeNext<T>{i1} + i2}; }


Observe that th returned value isn't simply



   return i1 + i2;


otherwise you return the correct type but with the wrong value: i1 + i2 is calculated as a T value so you can have overflow, then the sum is assigned to a typeNext<T> variable.



To avoid this problem, you have to initialize a typeNext<T> temporary variable with one of two values (typeNext<T>{i1}), then add the other (typeNext<T>{i1} + i2) obtaining a typeNext<T> value, finally return the computed value. This way the sum in calculated as a typeNext<T> sum and you doesn't have overflow.



The following is a full compiling example



#include <cstdint>
#include <climits>
#include <iostream>
#include <type_traits>

template <std::size_t N, typename = std::true_type>
struct typeFor;

/* in case std::intmax_t is bigger than 64 bits */
template <std::size_t N>
struct typeFor<N, std::integral_constant<bool,
(N > 64u) && (N <= sizeof(std::intmax_t)*CHAR_BIT)>>
{ using type = std::intmax_t; };

template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N > 32u) && (N <= 64u)>>
{ using type = std::int_least64_t; };

template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N > 16u) && (N <= 32u)>>
{ using type = std::int_least32_t; };

template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N > 8u) && (N <= 16u)>>
{ using type = std::int_least16_t; };

template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N <= 8u)>>
{ using type = std::int_least8_t; };

template <typename T>
using typeNext = typename typeFor<1u+sizeof(T)*CHAR_BIT>::type;

template<typename T>
typeNext<T> adder (T const & i1, T const & i2)
{ return {typeNext<T>{i1} + i2}; }

int main()
{
auto x = adder(INT_MAX-10, INT_MAX-3);

std::cout << "int: " << sizeof(int)*CHAR_BIT << std::endl;
std::cout << "long: " << sizeof(long)*CHAR_BIT << std::endl;
std::cout << "x: " << sizeof(x)*CHAR_BIT << std::endl;

std::cout << std::is_same<long, decltype(x)>::value << std::endl;
}


In my Linux 64bit platform, i get 32bit for int, 64bit for long and for x and also that long and decltype(x) are the same type.



But this is true for my platform; nothing guaranties that long and decltype(x) are ever the same.



Observe also that trying to get a type for the sum of two std::intmax_t's



 std::intmax_t  y {};

auto z = adder(y, y);


gives an error and doesn't compile because isn't defined a typeFor for a N bigger that sizeof(std::intmax_t)*CHAR_BIT.






share|improve this answer























  • Thank you max66, I will take a look and try to understand the approach. As you say it perhaps its black and white and there are more intercacies to it than simply int->long
    – Anand Kulkarni
    Nov 18 '18 at 4:23












  • @DeeptiKulkarni - Sorry: I've forgotten an important point. Answer modified: see the adder() description.
    – max66
    Nov 18 '18 at 10:37










  • I checked with interviewer later on a lighter note of chat and he explained that the above is good enough solution. All they needed was to cover int to long, double to long double. :)
    – Anand Kulkarni
    Nov 20 '18 at 11:50














3












3








3







Hence we need perhaps some metaprogramming template technique to derive the return type as long if T is int.




Not so simple.



If T is int, you're non sure that long is enough.



The standard say only that



1) the number of bits for int (sizeof(int) * CHAR_BIT) is at least 16



2) the number of bits for long (sizeof(long) * CHAR_BIT) is at least 32



3) sizeof(int) <= sizeof(long)



So if a compiler manage a int with sizeof(int) == sizeof(long), this is perfectly legal and



adder<long>(INT_MAX-10, INT_MAX-3);


doesn't works because long can be not enough to contain (without overflow) the sum between two int's.



I don't see a simple and elegant solution.



The best that come in my mind is based on the fact that C++11 introduced the following types



1) std::int_least8_t, smallest integer type with at least 8 bits



2) std::int_least16_t, smallest integer type with at least 16 bits



3) std::int_least32_t, smallest integer type with at least 32 bits



4) std::int_least64_t, smallest integer type with at least 64 bits



C++11 also introduce std::intmax_t as the maximum width integer type.



So I propose the following template type selector



template <std::size_t N, typename = std::true_type>
struct typeFor;

/* in case std::intmax_t is bigger than 64 bits */
template <std::size_t N>
struct typeFor<N, std::integral_constant<bool,
(N > 64u) && (N <= sizeof(std::intmax_t)*CHAR_BIT)>>
{ using type = std::intmax_t; };

template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N > 32u) && (N <= 64u)>>
{ using type = std::int_least64_t; };

template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N > 16u) && (N <= 32u)>>
{ using type = std::int_least32_t; };

template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N > 8u) && (N <= 16u)>>
{ using type = std::int_least16_t; };

template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N <= 8u)>>
{ using type = std::int_least8_t; };


that, given a number of bits, define the corresponding smallest "at least" integer type.



I propose also the following using



template <typename T>
using typeNext = typename typeFor<1u+sizeof(T)*CHAR_BIT>::type;


that, given a type T, detect the smallest integer type that surely contain a sum between two T values (a integer with a number of bits that is at least the number of bits of T plus one).



So your adder() simply become



template<typename T>
typeNext<T> adder (T const & i1, T const & i2)
{ return {typeNext<T>{i1} + i2}; }


Observe that th returned value isn't simply



   return i1 + i2;


otherwise you return the correct type but with the wrong value: i1 + i2 is calculated as a T value so you can have overflow, then the sum is assigned to a typeNext<T> variable.



To avoid this problem, you have to initialize a typeNext<T> temporary variable with one of two values (typeNext<T>{i1}), then add the other (typeNext<T>{i1} + i2) obtaining a typeNext<T> value, finally return the computed value. This way the sum in calculated as a typeNext<T> sum and you doesn't have overflow.



The following is a full compiling example



#include <cstdint>
#include <climits>
#include <iostream>
#include <type_traits>

template <std::size_t N, typename = std::true_type>
struct typeFor;

/* in case std::intmax_t is bigger than 64 bits */
template <std::size_t N>
struct typeFor<N, std::integral_constant<bool,
(N > 64u) && (N <= sizeof(std::intmax_t)*CHAR_BIT)>>
{ using type = std::intmax_t; };

template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N > 32u) && (N <= 64u)>>
{ using type = std::int_least64_t; };

template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N > 16u) && (N <= 32u)>>
{ using type = std::int_least32_t; };

template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N > 8u) && (N <= 16u)>>
{ using type = std::int_least16_t; };

template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N <= 8u)>>
{ using type = std::int_least8_t; };

template <typename T>
using typeNext = typename typeFor<1u+sizeof(T)*CHAR_BIT>::type;

template<typename T>
typeNext<T> adder (T const & i1, T const & i2)
{ return {typeNext<T>{i1} + i2}; }

int main()
{
auto x = adder(INT_MAX-10, INT_MAX-3);

std::cout << "int: " << sizeof(int)*CHAR_BIT << std::endl;
std::cout << "long: " << sizeof(long)*CHAR_BIT << std::endl;
std::cout << "x: " << sizeof(x)*CHAR_BIT << std::endl;

std::cout << std::is_same<long, decltype(x)>::value << std::endl;
}


In my Linux 64bit platform, i get 32bit for int, 64bit for long and for x and also that long and decltype(x) are the same type.



But this is true for my platform; nothing guaranties that long and decltype(x) are ever the same.



Observe also that trying to get a type for the sum of two std::intmax_t's



 std::intmax_t  y {};

auto z = adder(y, y);


gives an error and doesn't compile because isn't defined a typeFor for a N bigger that sizeof(std::intmax_t)*CHAR_BIT.






share|improve this answer















Hence we need perhaps some metaprogramming template technique to derive the return type as long if T is int.




Not so simple.



If T is int, you're non sure that long is enough.



The standard say only that



1) the number of bits for int (sizeof(int) * CHAR_BIT) is at least 16



2) the number of bits for long (sizeof(long) * CHAR_BIT) is at least 32



3) sizeof(int) <= sizeof(long)



So if a compiler manage a int with sizeof(int) == sizeof(long), this is perfectly legal and



adder<long>(INT_MAX-10, INT_MAX-3);


doesn't works because long can be not enough to contain (without overflow) the sum between two int's.



I don't see a simple and elegant solution.



The best that come in my mind is based on the fact that C++11 introduced the following types



1) std::int_least8_t, smallest integer type with at least 8 bits



2) std::int_least16_t, smallest integer type with at least 16 bits



3) std::int_least32_t, smallest integer type with at least 32 bits



4) std::int_least64_t, smallest integer type with at least 64 bits



C++11 also introduce std::intmax_t as the maximum width integer type.



So I propose the following template type selector



template <std::size_t N, typename = std::true_type>
struct typeFor;

/* in case std::intmax_t is bigger than 64 bits */
template <std::size_t N>
struct typeFor<N, std::integral_constant<bool,
(N > 64u) && (N <= sizeof(std::intmax_t)*CHAR_BIT)>>
{ using type = std::intmax_t; };

template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N > 32u) && (N <= 64u)>>
{ using type = std::int_least64_t; };

template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N > 16u) && (N <= 32u)>>
{ using type = std::int_least32_t; };

template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N > 8u) && (N <= 16u)>>
{ using type = std::int_least16_t; };

template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N <= 8u)>>
{ using type = std::int_least8_t; };


that, given a number of bits, defines the corresponding smallest "at least" integer type.



I also propose the following alias template



template <typename T>
using typeNext = typename typeFor<1u+sizeof(T)*CHAR_BIT>::type;


that, given a type T, detects the smallest integer type that is guaranteed to hold the sum of two T values (an integer type with at least one more bit than T).



So your adder() simply becomes



template<typename T>
typeNext<T> adder (T const & i1, T const & i2)
{ return {typeNext<T>{i1} + i2}; }


Observe that the returned value isn't simply



   return i1 + i2;


otherwise you'd return the correct type but the wrong value: i1 + i2 is calculated as a T value, so it can overflow before the result is assigned to a typeNext<T> variable.



To avoid this problem, you have to initialize a typeNext<T> temporary with one of the two values (typeNext<T>{i1}), then add the other (typeNext<T>{i1} + i2), obtaining a typeNext<T> value, and finally return the computed value. This way the sum is calculated as a typeNext<T> sum and you don't get overflow.



The following is a full compiling example



#include <cstdint>
#include <climits>
#include <iostream>
#include <type_traits>

template <std::size_t N, typename = std::true_type>
struct typeFor;

/* in case std::intmax_t is bigger than 64 bits */
template <std::size_t N>
struct typeFor<N, std::integral_constant<bool,
(N > 64u) && (N <= sizeof(std::intmax_t)*CHAR_BIT)>>
{ using type = std::intmax_t; };

template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N > 32u) && (N <= 64u)>>
{ using type = std::int_least64_t; };

template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N > 16u) && (N <= 32u)>>
{ using type = std::int_least32_t; };

template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N > 8u) && (N <= 16u)>>
{ using type = std::int_least16_t; };

template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N <= 8u)>>
{ using type = std::int_least8_t; };

template <typename T>
using typeNext = typename typeFor<1u+sizeof(T)*CHAR_BIT>::type;

template<typename T>
typeNext<T> adder (T const & i1, T const & i2)
{ return {typeNext<T>{i1} + i2}; }

int main()
{
auto x = adder(INT_MAX-10, INT_MAX-3);

std::cout << "int: " << sizeof(int)*CHAR_BIT << std::endl;
std::cout << "long: " << sizeof(long)*CHAR_BIT << std::endl;
std::cout << "x: " << sizeof(x)*CHAR_BIT << std::endl;

std::cout << std::is_same<long, decltype(x)>::value << std::endl;
}


On my 64-bit Linux platform, I get 32 bits for int and 64 bits for long and for x, and long and decltype(x) are indeed the same type.



But this is true only for my platform; nothing guarantees that long and decltype(x) are always the same.



Observe also that trying to get a type for the sum of two std::intmax_t's



 std::intmax_t  y {};

auto z = adder(y, y);


gives an error and doesn't compile, because no typeFor specialization is defined for an N bigger than sizeof(std::intmax_t)*CHAR_BIT.







edited Nov 18 '18 at 10:36

answered Nov 18 '18 at 3:38

max66
  • Thank you max66, I will take a look and try to understand the approach. As you say, it perhaps isn't black and white and there are more intricacies to it than simply int->long
    – Anand Kulkarni
    Nov 18 '18 at 4:23












  • @DeeptiKulkarni - Sorry: I've forgotten an important point. Answer modified: see the adder() description.
    – max66
    Nov 18 '18 at 10:37










  • I checked with interviewer later on a lighter note of chat and he explained that the above is good enough solution. All they needed was to cover int to long, double to long double. :)
    – Anand Kulkarni
    Nov 20 '18 at 11:50

















