
Only Install Dependencies If They Actually Change

If you install your dependencies after adding your codebase, you don't make use of the caching which Docker provides. While the build does use caching, the installation step is re-executed from scratch every time anything in your code changes, because changing a single file invalidates the layer that adds the code and every layer after it. The following technique can reduce that time in general.

You could make the dependency-installing step less frequent by adding the dependency definition file before you add all of your code. You can apply this if you have something like a requirements.txt file, which describes your third-party dependencies. You can read more about this approach, applied to building an image for a Python application, here.
Compare the following two ways of installing the dependencies:

ADD . /src
RUN pip install -r /src/code/requirements.txt # this will run every single time

versus:

ADD code/requirements.txt /src/requirements.txt
RUN pip install -r /src/requirements.txt # only re-executed if the file changes
ADD . /src

By adding your dependency definition file first, and installing the dependencies before you add your complete code, you'll be able to skip this expensive step whenever only your code changes.
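To show how this ordering fits into a complete build, here is a minimal sketch of a Dockerfile for a Python application. The base image, the code/ directory layout with an app.py entry point, and the start command are assumptions made up for this example, and COPY is used in place of ADD (for plain local files the two behave the same).

FROM python:3.11-slim

WORKDIR /src

# copy only the dependency definition first, so the expensive install
# below stays cached as long as requirements.txt is unchanged
COPY code/requirements.txt requirements.txt
RUN pip install -r requirements.txt

# add the rest of the code last; editing it only invalidates the
# layers from this point on, not the install above
COPY code/ code/

CMD ["python", "code/app.py"]

Rebuilding after a code-only change then reuses the cached install layer; only a change to requirements.txt triggers a fresh pip install.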

If you're just looking to reduce the final size of your image - take a look at multi-stage builds. Here is the skeleton of an example multi-stage Dockerfile:

FROM ubuntu as intermediate
# ... do the heavy lifting here, producing /result ...

FROM ubuntu
COPY --from=intermediate /result /result
# finally, /result ends up with the final data

If you're working with a tarball of data, need to unpack it to process it, but are only interested in the result, you could use this technique to do the heavy lifting in an intermediate image, accept that it's going to be huge due to all the diff layers, and only use the results by issuing a COPY --from command.
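As a concrete sketch of that tarball scenario, here is one way the intermediate stage could look. The archive name, the processing script and the /result path are made-up placeholders for illustration, not details from the article.

FROM ubuntu as intermediate
# ADD auto-extracts local tar archives, so the data ends up unpacked under /data;
# all of this bulk exists only in the intermediate stage
ADD data.tar.gz /data/
# hypothetical processing step that writes its output to /result
RUN /data/scripts/process.sh /data /result

FROM ubuntu
# only the processed result is copied over; the archive and the
# unpacked files never reach the final image
COPY --from=intermediate /result /result

Everything that happens in the intermediate stage stays out of the final image; only what you copy across with COPY --from contributes to its size.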


Can You Mount a Volume While Building Your Docker Image to Cache Dependencies?
