Running the multiscale model

Step-by-step guide for running the multiscale model


Important Notes:

The model should be run on one of our computational servers, as it will not perform well on a laptop. You may be able to run it locally if you specify a small number of cells and a short simulation time, but otherwise it is best run on a server.

All work done in a Docker container is lost when the container is closed/killed. Take careful note of the output folder info below and save any data, code, or revised versions of the notebooks to this folder before closing the Docker container; they will then be available afterwards.

Pulling the latest image from Docker

The quickest way to run the multiscale model is to pull the latest docker image and run it. The multiscale model image is hosted here: https://hub.docker.com/repository/docker/siftw/multiscalemodel-btncad/general

You can pull this image simply by using:

sudo docker pull siftw/multiscalemodel-btncad:main
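
If you want to check that the image downloaded correctly, you can list the Docker images available on the server (a standard Docker command, not specific to our model):

sudo docker images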

Now make a directory for the outputs:

mkdir ~/multiscaleModelOutputs

and finally run the docker command (there are more details below on things you might need to change, such as ports, your username, etc.):

sudo docker run --rm --group-add users -p 10005:8888 -e JUPYTER_ENABLE_LAB=yes -e JULIA_NUM_THREADS=64 -e NB_UID=$(id -u) -e NB_USER=simon -e JUPYTER_TOKEN=letmein -v ~/multiscaleModelOutputs/outputs:/multiscaleModel/SSDoutputs -e CHOWN_HOME=yes -e CHOWN_EXTRA_OPTS='-R' -w /multiscaleModel/ --user root -e CHOWN_EXTRA='/multiscaleModel/*,/multiscaleModel/' siftw/multiscalemodel-btncad:main

Now you should be able to go to port 10005 on the server and access the multiscale model through the web.

  • Open a web browser to the IP of the server and the port you specified above, e.g. http://139.184.169.98:10005

  • You will be asked for the password you specified above e.g. 'letmein'

  • Launch the "runMM_auto_sm-2023.ipynb" notebook.

  • Specify a number of cells, max generations and maximum time for your simulation (start small).

  • Run all cells to run the model.
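
  • When you are done, save anything you want to keep to the outputs folder, then stop the container. Because it was started with --rm, stopping it also removes it, which is why unsaved work is lost. From another terminal on the server you can find and stop your container with standard Docker commands (your container ID will be different):

sudo docker ps

sudo docker stop <CONTAINER ID>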

Step-by-step guide to building the multiscale model from the docker image

  1. Clone the MitchellLabDev github repo, which includes everything you need to run the multiscale model.

    • git clone https://github.com/SiFTW/MitchellLabDev.git

    • This will require authentication with git. You should have permission to access this private repo; if not, ask Simon for access. If you are asked for your username and password but access is still denied, you probably need to install the GitHub command line tools by following the instructions here: https://github.com/cli/cli/blob/trunk/docs/install_linux.md

  2. Make an output directory where all outputs will be stored.

    • Create a folder on the server that your user has permission to write to. This should be outside the MitchellLabDev folder, probably in your home directory.

      • mkdir ~/multiscaleModelOutputs

  3. Move into the multiscale model directory.

    • cd MitchellLabDev

    • cd dockerMultiscaleModel

    • or cd multiscaleModel-BTNCAD for the latest version with BCR and TLR modules.

  4. Build the docker image from the Dockerfile (note the dot at the end is important and not a typo).

    • sudo docker build -f Dockerfile .

  5. Make a note of the ID of the newly built docker image, shown at the end of the output from the previous command. It should look like this:

    • ea12c170bccf
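
    • Alternatively, if you would rather not copy the hash around, you can name the image when you build it and use that name in place of the image ID below (the tag name here is just a suggestion):

    • sudo docker build -f Dockerfile -t multiscalemodel:local .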

  6. Run the Docker container

    • sudo docker run --rm --group-add users -p <PORT>:8888 -e JUPYTER_ENABLE_LAB=yes -e JULIA_NUM_THREADS=64 -e NB_UID=$(id -u) -e NB_USER=<YOUR USER NAME> -e JUPYTER_TOKEN=<PASSWORD> -v <OUTPUT DIRECTORY>:/multiscaleModel/SSDoutputs -e CHOWN_HOME=yes -e CHOWN_EXTRA_OPTS='-R' -w /multiscaleModel/ --user root -e CHOWN_EXTRA='/multiscaleModel/*,/multiscaleModel/' <DOCKER IMAGE ID>

    • You need to make 5 substitutions into the above command:

      • Replace <PORT> with an available port on the server. Try 10000 first; if it clashes with someone else's container, try 10001 (there is a command to check which ports are in use below the example command).

      • Replace <YOUR USER NAME> with your Linux username. This is whatever appears on your command line before the @ sign, e.g. if your prompt says simon@simon-HP-Z6-G4-Workstation then the username is simon.

      • Replace <PASSWORD> with a password to control access to your jupyter notebook; this doesn't have to be super secure.

      • Replace <OUTPUT DIRECTORY> with the directory you created above. This should be an absolute path, e.g. ~/multiscaleModelOutputs (the shell expands ~ to your home directory).

      • Replace <DOCKER IMAGE ID> with the image ID you noted above, e.g. 75fbcb07b2c0.

    • A complete command may look like this:

      • sudo docker run --rm --group-add users -p 10002:8888 -e JUPYTER_ENABLE_LAB=yes -e JULIA_NUM_THREADS=64 -e NB_UID=$(id -u) -e NB_USER=simon -e JUPYTER_TOKEN=letmein -v ~/multiscaleModelOutputs:/multiscaleModel/SSDoutputs -e CHOWN_HOME=yes -e CHOWN_EXTRA_OPTS='-R' -w /multiscaleModel/ --user root -e CHOWN_EXTRA='/multiscaleModel/*,/multiscaleModel/' 75fbcb07b2c0
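
    • If you are not sure which ports are already taken, you can list the running containers and the ports they are using before picking one (a standard Docker command):

      • sudo docker ps --format '{{.Names}}: {{.Ports}}'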

    • You may want to mount both the NAS and the smaller SSD in your docker container so you can write big files out to the NAS without filling the SSD. However, your working folder should always be on the SSD, otherwise everything will be slow. You can do that like this:

      • sudo docker run --rm --group-add users -p 10002:8888 -e JUPYTER_ENABLE_LAB=yes -e JULIA_NUM_THREADS=32 -e NB_UID=$(id -u) -e NB_USER=simon -e JUPYTER_TOKEN=letmein -v ~/multiscaleModelOutputs:/multiscaleModel/SSDoutputs -v ~/synology/Simon/multiscaleWorkingFolder:/multiscaleModel/NASoutputs -e CHOWN_HOME=yes -e CHOWN_EXTRA_OPTS='-R' -w /multiscaleModel/ --user root -e CHOWN_EXTRA='/multiscaleModel/*,/multiscaleModel/' 8f3838b0802d

      • If you get permission errors related to chown on the NAS you can leave that part of the command out as shown below. It should still work fine:

        • sudo docker run --rm --group-add users -p 10002:8888 -e JUPYTER_ENABLE_LAB=yes -e JULIA_NUM_THREADS=32 -e NB_UID=$(id -u) -e NB_USER=simon -e JUPYTER_TOKEN=letmein -v ~/multiscaleModelOutputs:/multiscaleModel/SSDoutputs -v ~/synology/Simon/multiscaleWorkingFolder:/multiscaleModel/NASoutputs -e CHOWN_HOME=no -e CHOWN_EXTRA_OPTS='-R' -w /multiscaleModel/ --user root -e CHOWN_EXTRA='/multiscaleModel/SSDoutputs/*' 8f3838b0802d

        • Or if the NAS is mounted at /mnt/Synology: sudo docker run --rm --group-add users -p 10002:8888 -e JUPYTER_ENABLE_LAB=yes -e JULIA_NUM_THREADS=32 -e NB_UID=$(id -u) -e NB_USER=simon -e JUPYTER_TOKEN=letmein -v ~/multiscaleModelOutputs:/multiscaleModel/SSDoutputs -v /mnt/Synology/Simon/multiscaleWorkingFolder:/multiscaleModel/NASoutputs -e CHOWN_HOME=no -e CHOWN_EXTRA_OPTS='-R' -w /multiscaleModel/ --user root -e CHOWN_EXTRA='/multiscaleModel/SSDoutputs/*' 8f3838b0802d

        • To limit your memory and CPU usage in a fairer way, use: sudo docker run --rm --group-add users -p 10002:8888 -e JUPYTER_ENABLE_LAB=yes -e JULIA_NUM_THREADS=32 -e NB_UID=$(id -u) -e NB_USER=simon -e JUPYTER_TOKEN=letmein -v ~/multiscaleModelOutputs:/multiscaleModel/SSDoutputs -v /mnt/Synology/Simon/multiscaleWorkingFolder:/multiscaleModel/NASoutputs -e CHOWN_HOME=no -e CHOWN_EXTRA_OPTS='-R' -w /multiscaleModel/ --memory="32G" --memory-swap="35G" --cpus="32.0" --user root -e CHOWN_EXTRA='/multiscaleModel/SSDoutputs/*' 35f8f74a7d0a
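
        • To see what your container is actually using once it is up, you can check live resource usage from another terminal (a standard Docker command): sudo docker stats --no-stream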

  7. View the Jupyter notebook in a browser and run the model

    • Open a web browser to the IP of the server and the port you specified above, e.g. http://139.184.170.218:10000/tree

    • You will be asked for the password you specified above

    • Launch the "runMultiscaleModel.ipynb" notebook.

    • Specify a number of cells, max generations and maximum time for your simulation (start small).

    • Run all cells to run the model.

  8. All output will be saved in the output folder you specified above. Nothing else from your Docker container is permanent, so if you make changes to the jupyter notebook or generate output files in any other location they will be lost as soon as you close the docker container.
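
    • For example, if you have edited a notebook and want to keep your changes, you can copy it into the mounted output folder from a terminal inside JupyterLab before shutting the container down (this assumes the notebook sits in the /multiscaleModel/ working directory used above):

      • cp /multiscaleModel/runMultiscaleModel.ipynb /multiscaleModel/SSDoutputs/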

  9. Once the model is running you will see:

    Simulation has started, you probably now want to run the plotting code to visualise what is happening.
    • At this point you can open and run plotOutputWhileRunning.ipynb to visualise the results.

    • If you want the results to keep updating while the model is running, set loop=true; if you just want a single graph of the progress so far, leave loop=false.

    • All the simulated cells are saved as custom Cell structures to the "processedCells" folder in your output directory, named by their cell ID. You can load these as follows:

      using FileIO

      thisCell=load("cell_1.jl","thisCell")

      thisCell contains the following fields:

      generation::Int64
      ID::Int64
      motherCellID::Int64
      founderCellID::Int64
      ODEFile::String
      fate::Int64 #0 = unknown, 1 = division, 2 = death
      tBirth::Float64
      tDeath::Float64
      solutionObject #the entire solution object. If you don't need this comment it out for efficiency.
      y0 #initial conditions.
      yCoord::Float64 #for plotting lineage trees
      leftDaughterID::Int64
      rightDaughterID::Int64

      Each field can be accessed with dot notation, e.g. thisCell.generation.
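
      As a minimal sketch (assuming the same folder layout and "thisCell" key as in the load call above), you could load every saved cell and summarise the fates like this:

      using FileIO

      # load every saved cell from the processedCells folder in your output directory
      cellFiles = filter(f -> startswith(f, "cell_"), readdir("processedCells"))
      cells = [load(joinpath("processedCells", f), "thisCell") for f in cellFiles]

      # count fates using the codes listed above (1 = division, 2 = death)
      divided = count(c -> c.fate == 1, cells)
      died = count(c -> c.fate == 2, cells)
      println(length(cells), " cells loaded: ", divided, " divided, ", died, " died")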
