Running the multiscale model
Step-by-step guide for running the multiscale model
The model should normally be run on one of our computational servers, as it will not perform well on a laptop. You may be able to run it locally if you specify a small number of cells and a short simulation time; otherwise it is best run on a server.
All work done inside a Docker container is lost when the container is closed or killed. Make careful note of the output folder information below and save any data, code, or revised versions of the notebooks to that folder before closing the container; anything saved there will still be available afterwards.
The quickest way to run the multiscale model is to pull the latest Docker image and run it. The image is hosted here:
You can pull this image simply by using:
sudo docker pull siftw/multiscalemodel-btncad:main
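If you want to confirm the image downloaded correctly, you can list your local Docker images (standard Docker command; the tag shown should match the one you pulled):
sudo docker images | grep multiscalemodel-btncad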
Now make a directory for the outputs:
mkdir ~/multiscaleModelOutputs
and finally run the docker command (there are more details below on things you might need to change, such as ports, your username, etc.):
sudo docker run --rm --group-add users -p 10005:8888 -e JUPYTER_ENABLE_LAB=yes -e JULIA_NUM_THREADS=64 -e NB_UID=$(id -u) -e NB_USER=simon -e JUPYTER_TOKEN=letmein -v ~/multiscaleModelOutputs/outputs:/multiscaleModel/SSDoutputs -e CHOWN_HOME=yes -e CHOWN_EXTRA_OPTS='-R' -w /multiscaleModel/ --user root -e CHOWN_EXTRA='/multiscaleModel/*,/multiscaleModel/' siftw/multiscalemodel-btncad:main
Now you should be able to access the multiscale model through the web on port 10005 of the server.
Open a web browser to the IP of the server and the port you specified above, e.g. http://<SERVER IP>:10005
You will be asked for the password you specified above e.g. 'letmein'
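If the page does not load or the password is rejected, it can help to confirm the container is actually running and to read its startup log (standard Docker commands; the container ID will be different on your machine):
sudo docker ps
sudo docker logs <CONTAINER ID>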
Launch the "runMM_auto_sm-2023.ipynb" notebook.
Specify a number of cells, max generations and maximum time for your simulation (start small).
Run all cells to run the model.
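When you are finished, copy anything you want to keep into the SSDoutputs folder (see the warning above) and then stop the container; one typical way to do this from another terminal is (standard Docker commands, container ID will differ):
sudo docker ps
sudo docker stop <CONTAINER ID>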
Clone the MitchellLabDev github repo, which includes everything you need to run the multiscale model (see the note on git authentication at the end of this page).
Make an output directory where all outputs will be stored.
Create a folder on the server that your user has permission to write to. This should be outside the MitchellLabDev folder, probably in your home directory.
mkdir ~/multiscaleModelOutputs
Move into the multiscale model directory:
cd MitchellLabDev
cd dockerMultiscaleModel
or, for the latest version with BCR and TLR modules:
cd multiscaleModel-BTNCAD
Build the docker image from the Dockerfile (note the dot at the end is important and not a typo).
sudo docker build -f Dockerfile .
Make a note of the ID of the docker image created from the end of the output from the previous command. It should look like this:
ea12c170bccf
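If you would rather not copy the image ID each time, an alternative (not required by this guide) is to tag the image when you build it and use that tag in place of the ID in the run command; the tag name below is just an example:
sudo docker build -f Dockerfile -t multiscalemodel:local .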
Run the Docker container
sudo docker run --rm --group-add users -p <PORT>:8888 -e JUPYTER_ENABLE_LAB=yes -e JULIA_NUM_THREADS=64 -e NB_UID=$(id -u) -e NB_USER=<YOUR USER NAME> -e JUPYTER_TOKEN=<PASSWORD> -v <OUTPUT DIRECTORY>:/multiscaleModel/SSDoutputs -e CHOWN_HOME=yes -e CHOWN_EXTRA_OPTS='-R' -w /multiscaleModel/ --user root -e CHOWN_EXTRA='/multiscaleModel/*,/multiscaleModel/' <DOCKER CONTAINER ID>
You need to make five substitutions in the above command:
Replace <PORT> with an available port on the server. Try 10000 first; if you get a clash with someone else, try 10001 (one way to check which ports are already in use is shown after this list).
Replace <YOUR USER NAME> with your Linux username. This is whatever appears on your command line before the @ sign, e.g. if the prompt says simon@simon-HP-Z6-G4-Workstation then the username is simon.
Replace <PASSWORD> with a password to control access to your Jupyter notebook; this doesn't have to be super secure.
Replace <OUTPUT DIRECTORY> with the directory you created above. This should be an absolute path, e.g. ~/multiscaleModelOutputs.
Replace <DOCKER CONTAINER ID> with the image ID you noted above, e.g. 75fbcb07b2c0.
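One way to see which ports are already taken by other containers on the server is to list the running containers and their port mappings (standard Docker command):
sudo docker ps --format '{{.Names}}: {{.Ports}}'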
A complete command may look like this:
sudo docker run --rm --group-add users -p 10002:8888 -e JUPYTER_ENABLE_LAB=yes -e JULIA_NUM_THREADS=64 -e NB_UID=$(id -u) -e NB_USER=simon -e JUPYTER_TOKEN=letmein -v ~/multiscaleModelOutputs:/multiscaleModel/SSDoutputs -e CHOWN_HOME=yes -e CHOWN_EXTRA_OPTS='-R' -w /multiscaleModel/ --user root -e CHOWN_EXTRA='/multiscaleModel/*,/multiscaleModel/' 75fbcb07b2c0
You may want to mount both the NAS and the smaller SSD in your Docker container so you can write big files out to the NAS without filling the SSD. However, your working folder should always be on the SSD, otherwise everything will be slow. You can do that like this:
sudo docker run --rm --group-add users -p 10002:8888 -e JUPYTER_ENABLE_LAB=yes -e JULIA_NUM_THREADS=32 -e NB_UID=$(id -u) -e NB_USER=simon -e JUPYTER_TOKEN=letmein -v ~/multiscaleModelOutputs:/multiscaleModel/SSDoutputs -v ~/synology/Simon/multiscaleWorkingFolder:/multiscaleModel/NASoutputs -e CHOWN_HOME=yes -e CHOWN_EXTRA_OPTS='-R' -w /multiscaleModel/ --user root -e CHOWN_EXTRA='/multiscaleModel/*,/multiscaleModel/' 8f3838b0802d
If you get permission errors related to chown on the NAS you can leave that part of the command out as shown below. It should still work fine:
sudo docker run --rm --group-add users -p 10002:8888 -e JUPYTER_ENABLE_LAB=yes -e JULIA_NUM_THREADS=32 -e NB_UID=$(id -u) -e NB_USER=simon -e JUPYTER_TOKEN=letmein -v ~/multiscaleModelOutputs:/multiscaleModel/SSDoutputs -v ~/synology/Simon/multiscaleWorkingFolder:/multiscaleModel/NASoutputs -e CHOWN_HOME=no -e CHOWN_EXTRA_OPTS='-R' -w /multiscaleModel/ --user root -e CHOWN_EXTRA='/multiscaleModel/SSDoutputs/*' 8f3838b0802d
Or if the NAS is mounted at /mnt/Synology:
sudo docker run --rm --group-add users -p 10002:8888 -e JUPYTER_ENABLE_LAB=yes -e JULIA_NUM_THREADS=32 -e NB_UID=$(id -u) -e NB_USER=simon -e JUPYTER_TOKEN=letmein -v ~/multiscaleModelOutputs:/multiscaleModel/SSDoutputs -v /mnt/Synology/Simon/multiscaleWorkingFolder:/multiscaleModel/NASoutputs -e CHOWN_HOME=no -e CHOWN_EXTRA_OPTS='-R' -w /multiscaleModel/ --user root -e CHOWN_EXTRA='/multiscaleModel/SSDoutputs/*' 8f3838b0802d
To limit your memory and CPU usage in a fairer way, use:
sudo docker run --rm --group-add users -p 10002:8888 -e JUPYTER_ENABLE_LAB=yes -e JULIA_NUM_THREADS=32 -e NB_UID=$(id -u) -e NB_USER=simon -e JUPYTER_TOKEN=letmein -v ~/multiscaleModelOutputs:/multiscaleModel/SSDoutputs -v /mnt/Synology/Simon/multiscaleWorkingFolder:/multiscaleModel/NASoutputs -e CHOWN_HOME=no -e CHOWN_EXTRA_OPTS='-R' -w /multiscaleModel/ --memory="32G" --memory-swap="35G" --cpus="32.0" --user root -e CHOWN_EXTRA='/multiscaleModel/SSDoutputs/*' 35f8f74a7d0a
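To check how much memory and CPU your container is actually using while a simulation runs, Docker's built-in live stats view is one option (standard Docker command):
sudo docker stats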
View the Jupyter notebook in a browser and run the model
Open a web browser to the IP of the server and the port you specified above
You will be asked for the password you specified above
Launch the "runMultiscaleModel.ipynb" notebook.
Specify a number of cells, max generations and maximum time for your simulation (start small).
Run all cells to run the model.
All output will be saved in the output folder you specified above. Nothing else in your Docker container is permanent, so if you make changes to the Jupyter notebook or generate output files in any other location, they will be lost as soon as you close the container.
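For example, if you have edited the notebook and want to keep it, you can copy it into the mounted output folder from a terminal inside JupyterLab before closing the container (the notebook filename here is the one mentioned above; adjust the paths if yours differ):
cp /multiscaleModel/runMultiscaleModel.ipynb /multiscaleModel/SSDoutputs/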
Once the model is running you will see progress output appear in the notebook.
At this point you can open and run plotOutputWhileRunning.ipynb to visualise the results.
If you want the results to keep updating while the model is running, set loop=true; if you just want to generate a single graph of the progress, leave loop=false.
All the simulated cells are saved as custom Cell structures to the "processedCells" folder in your output directory, named by their cell ID. You can load these as follows:
using FileIO
# load the saved Cell structure for cell ID 1 from the processedCells folder
thisCell = load("cell_1.jl", "thisCell")
thisCell contains a number of fields, each of which can be accessed with dot notation, e.g. thisCell.generation.
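On the host, you can check that cells are being written by listing the processedCells folder inside the output directory you mounted (the exact path depends on which mount you used above):
ls ~/multiscaleModelOutputs/processedCells/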
git clone
Cloning the repo will require authentication with GitHub. You should have permission to access this private repo; if not, ask Simon for access. If you are asked for your username and password but access is still denied, you probably need to install the GitHub command line tools. Follow the instructions here:
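Once the GitHub command line tool is installed, authentication is usually a single interactive command (this assumes the GitHub CLI, gh, is the tool referred to above):
gh auth login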