aws s3 copy fails with: 'Remote end closed connection without response'
The last step of my Lambda function copies a temporary S3 key to its final name (which may or may not already exist). The copy succeeds most of the time, but occasionally fails with:




ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response',))




File "/var/task/main.py", line 217, in _s3_copy
  s3cli.copy_object(Bucket=dst_bucketname, Key=dst_keyname, **xtra)
File "/var/runtime/botocore/client.py", line 314, in _api_call
  return self._make_api_call(operation_name, kwargs)
File "/var/runtime/botocore/client.py", line 599, in _make_api_call
  operation_model, request_dict)
File "/var/runtime/botocore/endpoint.py", line 148, in make_request
  return self._send_request(request_dict, operation_model)
File "/var/runtime/botocore/endpoint.py", line 177, in _send_request
  success_response, exception):
File "/var/runtime/botocore/endpoint.py", line 273, in _needs_retry
  caught_exception=caught_exception, request_dict=request_dict)
File "/var/runtime/botocore/hooks.py", line 227, in emit
  return self._emit(event_name, kwargs)
File "/var/runtime/botocore/hooks.py", line 210, in _emit
  response = handler(**kwargs)
File "/var/runtime/botocore/retryhandler.py", line 183, in __call__
  if self._checker(attempts, response, caught_exception):
File "/var/runtime/botocore/retryhandler.py", line 251, in __call__
  caught_exception)
File "/var/runtime/botocore/retryhandler.py", line 277, in _should_retry
  return self._checker(attempt_number, response, caught_exception)
File "/var/runtime/botocore/retryhandler.py", line 317, in __call__
  caught_exception)
File "/var/runtime/botocore/retryhandler.py", line 223, in __call__
  attempt_number, caught_exception)
File "/var/runtime/botocore/retryhandler.py", line 359, in _check_caught_exception
  raise caught_exception
File "/var/runtime/botocore/endpoint.py", line 222, in _get_response
  proxies=self.proxies, timeout=self.timeout)
File "/var/runtime/botocore/vendored/requests/sessions.py", line 573, in send
  r = adapter.send(request, **kwargs)
File "/var/runtime/botocore/vendored/requests/adapters.py", line 415, in send
  raise ConnectionError(err, request=request)
botocore.vendored.requests.exceptions.ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response',))


What is the remedy for this? Retries? Or different client retry/timeout settings?



Note: The Lambda runs boto3 1.7.74 and botocore 1.10.74. The files vary in size, but are in the 2-4 GiB range.
      amazon-web-services amazon-s3 aws-lambda
      edited Nov 20 '18 at 19:29
      init_js
      asked Nov 20 '18 at 9:42
1 Answer
I've found a cause for this, though it wasn't at all obvious from the stack traces what was going on.



Solution: the IAM policy on the role assigned to the Lambda didn't allow the PutObject action on the destination bucket. I simply added the destination bucket to the policy's resource list.
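For reference, the kind of statement that was missing looks roughly like this (the bucket name is a placeholder; note that CopyObject also needs read access, s3:GetObject, on the source object):

```json
{
  "Effect": "Allow",
  "Action": ["s3:PutObject"],
  "Resource": ["arn:aws:s3:::example-destination-bucket/*"]
}
```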



The way I found this was to raise boto3's logging verbosity. Just before the copy, I added the following:



    import logging
    import boto3

    boto3.set_stream_logger('', logging.DEBUG)


The first parameter is normally the name of a logger (e.g. a service), but '' is interpreted as matching all of them. Logging for the Lambda was set up to go to CloudWatch Logs, so I could inspect the output there.



With that in place, I discovered the following errors during the CopyObject call:



          [DEBUG] 2018-11-20T10:20:37.981Z    4a230839-ecac-11e8-8412-7bf802a1306b    ConnectionError received when sending HTTP request.
          Traceback (most recent call last):
          File "/var/runtime/botocore/vendored/requests/packages/urllib3/connectionpool.py", line 372, in _make_request
          httplib_response = conn.getresponse(buffering=True)
          TypeError: getresponse() got an unexpected keyword argument 'buffering'

          During handling of the above exception, another exception occurred:

          ...repeated...

          During handling of the above exception, another exception occurred:

          Traceback (most recent call last):
          File "/var/runtime/botocore/endpoint.py", line 222, in _get_response
          proxies=self.proxies, timeout=self.timeout)
          File "/var/runtime/botocore/vendored/requests/sessions.py", line 573, in send
          r = adapter.send(request, **kwargs)
          File "/var/runtime/botocore/vendored/requests/adapters.py", line 415, in send
          raise ConnectionError(err, request=request)
          botocore.vendored.requests.exceptions.ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response',))


At first it looked as if a TypeError was causing the problem, which led me to suspect a version incompatibility between requests and urllib3. It turns out this is a legacy code path in urllib3 that is expected to be hit when other network errors occur (see the buffering issue on GitHub), so the TypeError was a red herring; it's normal. The real cause is whatever triggered it in the first place.



A few entries earlier in the trace, the connection to the bucket's HTTPS endpoint succeeded, but no response could be read. So the connection was closed before requests could even read the status code of the PUT response.



I would have expected to at least be able to read <Code>AccessDenied</Code>, which would have eased diagnosis. Other S3 calls do fail with AccessDenied, and I'm not sure why this one is special; perhaps it's behavior meant to protect public-facing buckets.



In any case, adjusting the permissions fixed it.
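As for the original question about retries: botocore already retries many transient errors internally, and no amount of retrying would have fixed this permissions problem. For genuinely transient disconnects, though, a last-resort wrapper around the copy can help. This is a generic sketch using Python's builtin ConnectionError; the names in the usage comment (s3cli, dst_bucketname, etc.) are the ones from the question, and in real code you would catch botocore's own exception types instead:

```python
import time


def retry(fn, attempts=4, base_delay=0.5, retriable=(ConnectionError,)):
    """Call fn(), retrying with exponential backoff on retriable errors."""
    for attempt in range(attempts):
        try:
            return fn()
        except retriable:
            # Re-raise once the attempt budget is exhausted.
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))


# Usage (names from the question):
# retry(lambda: s3cli.copy_object(Bucket=dst_bucketname, Key=dst_keyname, **xtra))
```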
                edited Nov 20 '18 at 19:43
                answered Nov 20 '18 at 11:59
init_js