Guide – How to: setup
Latest update: 2020-12-21
This guide illustrates prerequisites and environment setup for ESP. For a quicker and more portable setup, we also provide Docker images for ESP as described in the Docker section of this guide.
Table of Contents
- Software packages
- CAD tools
- Environment variables
- ESP repository
- Software toolchain
- Patching Ariane for Xcelium Simulator
- Docker
Software packages
The ESP design flow has been developed and tested on the CentOS 7 Linux distribution, which is our preferred OS. Alternatively, you can use Ubuntu 18.04 or Red Hat Enterprise Linux 7.8.
Note: Cadence tools do not officially support Ubuntu 18.04. If you choose to work with Ubuntu, the accelerators unit test simulation within the Cadence Stratus HLS environment may not function properly.
In order to support the embedded scripts and the CAD tools for simulation and implementation, the following packages should be installed with the system package manager.
CentOS 7
# Miscellaneous
sudo yum install -y git octave octave-io jq
# Python
sudo yum install -y python python-pip python3 python3-pip python3-tkinter
sudo pip3 install Pmw
# Perl
sudo yum install -y perl perl-Env perl-YAML perl-XML-Simple
sudo yum install -y perl-ExtUtils-MakeMaker perl-Thread-Queue perl-Capture-Tiny
# CAD tools and SW toolchains dependencies
sudo yum install -y xterm
sudo yum install -y csh ksh zsh tcl
sudo yum install -y glibc-devel glibc-devel.i686
sudo yum install -y glibc-static glibc-static.i686
sudo yum install -y mesa-libGL.i686 mesa-libGLU.i686
sudo yum install -y mesa-libGL mesa-libGLU
sudo yum install -y mesa-dri-drivers mesa-dri-drivers.i686
sudo yum install -y readline-devel readline-devel.i686
sudo yum install -y libXp libXp.i686
sudo yum install -y openmotif
sudo yum install -y ncurses
sudo yum install -y gdbm-devel gdbm-devel.i686
sudo yum install -y libSM libSM.i686
sudo yum install -y libXcursor libXcursor.i686
sudo yum install -y libXft libXft.i686
sudo yum install -y libXrandr libXrandr.i686
sudo yum install -y libXScrnSaver libXScrnSaver.i686
sudo yum install -y libmpc-devel libmpc-devel.i686
sudo yum install -y nspr nspr.i686
sudo yum install -y nspr-devel nspr-devel.i686
sudo yum install -y tk tk-devel
sudo yum install -y dtc bison flex bzip2 patch bc
sudo yum install -y Xvfb
sudo ln -s /lib64/libtiff.so.5 /lib64/libtiff.so.3
sudo ln -s /usr/lib64/libmpc.so.3 /usr/lib64/libmpc.so.2
# For older GUIs (e.g. Stratus 18.2)
sudo yum install -y libpng12 libpng12.i686
# QT
sudo yum install -y qtcreator
sudo ln -s /usr/bin/qmake-qt5 /usr/bin/qmake
Ubuntu 18.04
# Miscellaneous
sudo apt install -y git octave octave-io jq
# Python
sudo apt install -y python python-pip python3 python3-pip python3-tk
sudo pip3 install Pmw # sudo may not be needed
# Perl
sudo apt install -y perl libyaml-perl libxml-perl
# CAD tools and SW toolchains dependencies
sudo apt install -y xterm
sudo apt install -y csh ksh zsh tcl
sudo apt install -y build-essential
sudo apt install -y libgl1-mesa-dev libglu1-mesa libgl1-mesa-dri
sudo apt install -y libreadline-dev
sudo apt install -y libxpm-dev
sudo apt install -y libmotif-dev
sudo apt install -y libncurses5
sudo apt install -y libncurses-dev
sudo apt install -y libgdbm-dev
sudo apt install -y libsm-dev
sudo apt install -y libxcursor-dev
sudo apt install -y libxft-dev
sudo apt install -y libxrandr-dev
sudo apt install -y libxss-dev
sudo apt install -y libmpc-dev
sudo apt install -y libnspr4
sudo apt install -y libnspr4-dev
sudo apt install -y tk tk-dev
sudo apt install -y flex
sudo apt install -y rename
sudo apt install -y zlib1g:i386
sudo apt install -y gcc-multilib
sudo apt install -y device-tree-compiler
sudo apt install -y bison
sudo apt install -y xvfb
# For older GUIs (e.g. Stratus 18.2)
echo 'deb http://security.ubuntu.com/ubuntu xenial-security main' | sudo tee -a /etc/apt/sources.list
sudo apt install -y libpng12-0
# QT
sudo apt install -y qtcreator
Red Hat Enterprise Linux 7.8
# Miscellaneous
sudo yum install -y git octave octave-io jq
# Python
sudo yum install -y python python-pip python3 python3-pip python34-tkinter
sudo pip3 install Pmw
# Perl
sudo yum install -y perl perl-YAML perl-ExtUtils-MakeMaker
sudo yum install -y http://mirror.centos.org/centos/7/os/x86_64/Packages/perl-XML-Simple-2.20-5.el7.noarch.rpm
# CAD tools and SW toolchains dependencies
sudo yum install -y xterm
sudo yum install -y csh ksh zsh tcl
sudo yum install -y glibc-devel glibc-devel.i686
sudo yum install -y glibc-static glibc-static.i686
sudo yum install -y mesa-libGL.i686 mesa-libGLU.i686
sudo yum install -y mesa-libGL mesa-libGLU
sudo yum install -y mesa-dri-drivers mesa-dri-drivers.i686
sudo yum install -y readline-devel readline-devel.i686
sudo yum install -y libXp libXp.i686
sudo yum install -y openmotif
sudo yum install -y ncurses
sudo yum install -y gdbm-devel gdbm-devel.i686
sudo yum install -y libSM libSM.i686
sudo yum install -y libXcursor libXcursor.i686
sudo yum install -y libXft libXft.i686
sudo yum install -y libXrandr libXrandr.i686
sudo yum install -y libXScrnSaver libXScrnSaver.i686
sudo yum install -y libmpc-devel
sudo yum install -y http://mirror.centos.org/centos/7/os/x86_64/Packages/libmpc-1.0.1-3.el7.i686.rpm
sudo yum install -y nspr nspr.i686
sudo yum install -y nspr-devel nspr-devel.i686
sudo yum install -y tk tk-devel
sudo yum install -y Xvfb
sudo yum install -y http://mirror.centos.org/centos/7/extras/x86_64/Packages/dtc-1.4.6-1.el7.x86_64.rpm
sudo ln -s /lib64/libtiff.so.5 /lib64/libtiff.so.3
sudo ln -s /usr/lib64/libmpc.so.3 /usr/lib64/libmpc.so.2
sudo yum install -y bison
# For older GUIs (e.g. Stratus 18.2)
sudo yum install -y libpng12 libpng12.i686
# QT
sudo yum install -y qtcreator
sudo ln -s /usr/bin/qmake-qt5 /usr/bin/qmake
CAD tools
ESP leverages a mix of open-source tools and scripts, as well as commercial tools. The following list specifies which commercial tools are currently supported and which tools are required to complete some steps of the ESP design methodology. Support for the tools marked as beta in the following list has not been fully tested or completed.
Required
- RTL simulation: requires one of the following tools (64-bit version)
  - Mentor Graphics ModelSim SE 2019.2: RTL system-level simulator
  - Cadence Incisive 15.2: RTL system-level simulator (LEON3 only)
  - Cadence Xcelium 18.03: RTL system-level simulator (apply the patch below)
- FPGA prototyping: requires the following tool
  - Xilinx Vivado 2019.2: logic synthesis and implementation for FPGA
Note: As of May 18, 2020 we updated the ESP synthesis and simulation scripts to work with Vivado 2019.2 and Modelsim 2019.2. This update improves simulation speed and enables us to add support for the newest Xilinx development boards. If you wish to continue working with the older versions of the tools (Vivado 2018.2 and Modelsim 10.6c or 10.7a), you may keep doing so and still get the latest updates from the ESP repository: after you pull the latest changes, simply revert the updates to the ESP scripts with git revert 44b1753.
Optional
- Accelerator design: the following tools are optional
  - Cadence Stratus HLS 18.20: high-level synthesis of accelerators from SystemC
  - Xilinx Vivado HLS 2019.2: high-level synthesis of accelerators from C/C++ (beta)
  - Mentor Catapult HLS 10.5b: high-level synthesis of accelerators from SystemC and C/C++
  - Xilinx Vivado 2018.3: required by Catapult HLS
- Cache hierarchy: the default ESP configuration selects an RTL implementation of the cache hierarchy (no additional tool required). This supports both multi-core execution and coherence for accelerators. Optionally, ESP provides a SystemC implementation of the caches that can be synthesized with Cadence Stratus HLS. The SystemC version of the caches is particularly helpful for conducting architectural research on coherence, as the SystemC model is significantly easier to modify than the RTL implementation. To synthesize this version of the ESP caches, the following tool is required:
  - Cadence Stratus HLS 18.15: high-level synthesis of caches from SystemC

Note: we will soon make available an instance of the RTL for the caches synthesized with Cadence Stratus HLS (pending approval for release).
Environment variables
Here are the environment variables required by the CAD tools:
# Cadence: Stratus HLS, Incisive, Xcelium
# e.g. <stratus_path> = /opt/cadence/stratus182
# e.g. <incisive_path> = /opt/cadence/incisive152
# e.g. <xcelium_path> = /opt/cadence/xcelium18
export LM_LICENSE_FILE=$LM_LICENSE_FILE:<cadence_license_path>
export PATH=$PATH:<stratus_path>/bin:<incisive_path>/tools/cdsgcc/gcc/bin:<xcelium_path>/tools/cdsgcc/gcc/bin
export CDS_AUTO_64BIT=all
export HOST=$(hostname) # for Ubuntu only
# Xilinx: Vivado, Vivado HLS
# e.g. <vivado_path> = /opt/xilinx/Vivado/2018.2
export XILINXD_LICENSE_FILE=<xilinx_license_path>
source <vivado_path>/settings64.sh
# Mentor: Catapult HLS, Modelsim
# e.g. <modelsim_path> = /opt/mentor/modeltech
# e.g. <catapult_path> = /opt/mentor/catapult
export LM_LICENSE_FILE=$LM_LICENSE_FILE:<mentor_license_path>
export PATH=$PATH:<modelsim_path>/bin
export AMS_MODEL_TECH=<modelsim_path>
export PATH=$PATH:<catapult_path>/Mgc_home/bin
export SYSTEMC=<catapult_path>/Mgc_home/shared
export SYSTEMC_HOME=<catapult_path>/Mgc_home/shared
export MGC_HOME=<catapult_path>/Mgc_home
export LIBDIR="-L<catapult_path>/Mgc_home/shared/lib $LIBDIR"
# RISC-V (for Ariane and Ibex)
# e.g. <riscv_path> = /opt/riscv
# e.g. <riscv32imc_path> = /opt/riscv32imc
export RISCV=<riscv_path>
export RISCV32IMC=<riscv32imc_path>
export PATH=$PATH:<riscv_path>/bin:<riscv32imc_path>/bin
# Leon3
# e.g. <leon3_path> = /opt/leon
export PATH=$PATH:<leon3_path>/bin
export PATH=$PATH:<leon3_path>/mklinuximg
export PATH=$PATH:<leon3_path>/sparc-elf/bin
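After adapting and sourcing the exports above, a quick way to confirm that they are visible in the current shell is to print them. This sketch only reads variables; the names match the exports listed above:

```shell
# Print a few of the exported variables; "<unset>" means the
# corresponding export has not been sourced in this shell yet.
report=""
for var in LM_LICENSE_FILE XILINXD_LICENSE_FILE MGC_HOME RISCV RISCV32IMC; do
  eval "val=\${$var:-<unset>}"
  report="$report$var=$val
"
done
printf '%s' "$report"
```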
ESP repository
ESP is available as an open-source GitHub repository. All original files in ESP are governed by the Apache 2.0 License, whereas third-party open-source software maintains its original license and includes the copyright notice from the authors.
Use the following commands to obtain the ESP source code from GitHub. The --recursive option initializes and downloads the source code of the submodules linked by the main ESP repository. Please be patient, as one of the submodules is Linux, which embeds a very large history.
# Clone ESP and all Git submodules at once
git clone --recursive https://github.com/sld-columbia/esp.git
# Alternatively
git clone https://github.com/sld-columbia/esp.git
git submodule update --init --recursive
Software toolchain
Every ESP SoC must include at least one processor core to execute the operating system and the target applications. ESP currently supports the Leon3 core from GRLIB, which implements the 32-bit SPARC V8 instruction-set architecture (ISA), and the Ariane core from ETH Zurich, which implements the 64-bit version of the RISC-V ISA.
Since the target ISA is not x86, a cross compiler is required to build software that can execute on the target processor. In addition, Linux requires a root filesystem that hosts all necessary initialization scripts, header files and dynamically-linked libraries. A partial overlay for the root filesystem is embedded in the ESP code base; however, the Busybox binaries and the necessary libraries must be compiled and combined with the overlay. Failure to complete this step will prevent Linux from completing the boot process.
ESP provides scripts that allow users to build the toolchain automatically.
cd <esp>
# Leon3 toolchain
./utils/toolchain/build_leon3_toolchain.sh
# RISC-V 64-bit toolchain (for the Ariane processor)
./utils/toolchain/build_riscv_toolchain.sh
# RISC-V 32-bit toolchain (for the Ibex processor)
./utils/toolchain/build_riscv32imc_toolchain.sh
The scripts go through several interactive steps to build the toolchain. If one step fails due to missing packages on the host system, users can restart the script and skip the phases that completed successfully. For a default installation, users can accept all of the default answers of the interactive script.
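Once the scripts complete, you can sanity-check that the cross compilers are reachable from your shell. The compiler names below are the typical prefixes for these toolchains (sparc-elf matches the <leon3_path>/sparc-elf/bin layout above); the exact prefixes may differ on your installation:

```shell
# Check that the cross compilers are on PATH. The tool names are
# typical toolchain prefixes; adjust them if yours differ.
status=""
for tool in sparc-elf-gcc riscv64-unknown-linux-gnu-gcc riscv32-unknown-elf-gcc; do
  if command -v "$tool" >/dev/null 2>&1; then
    status="$status found:$tool"
  else
    status="$status missing:$tool"
  fi
done
echo "$status"
```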
Patching Ariane for Xcelium Simulator
The source code of Ariane includes UVM and SVA features that only work with Modelsim. These features are disabled by the ESP Makefile, which defines the variable VERILATOR during compilation. As a result, the instruction tracer is not instantiated, because it requires UVM support. The developers of Ariane replaced the instruction tracer with custom RTL code when simulating with Verilator. This code, unfortunately, triggers non-suppressible errors in the Xcelium compiler.
While we recommend using Modelsim for simulation and keeping all of the UVM and SVA features enabled, if you need to simulate with Xcelium, please apply this patch to the Ariane repository in third-party/ariane. The patch simply comments out the code for instruction tracing with Verilator.
Finally, please note that Incisive cannot compile the source code of Ariane, but it works with Leon3-based ESP instances.
Docker
The ESP Docker images are available on Docker Hub: hub.docker.com/repository/docker/davidegiri/esp.
The repository containing the Dockerfiles for generating the ESP Docker images is available on GitHub: https://github.com/sld-columbia/esp-docker.
The ESP Docker images are based on a CentOS 7 Docker image and they contain:
- the installation of all the software packages required by ESP;
- the installation of utilities such as vim, emacs, tmux, socat and minicom, which are useful when working with ESP;
- an environment variables setup script to be customized with the correct CAD tools paths and licenses;
- the ESP repository and all its submodules;
- the installation of the software toolchains for RISC-V and Leon3.
There are two types of images, identified by the full and small strings in the tag. The complete images are labeled with full and are over 5GB in size in the compressed format. We also offer smaller images, labeled with small, that are slightly above 1GB in size because they do not include the installation of the RISC-V and Leon3 software toolchains.
We have tested the ESP Docker images on Windows 10, MacOS 10.15, CentOS 7, Ubuntu 18.04 and RedHat 7.8. However, they should work on other OS distributions as well.
Install Docker
If you haven’t already, install Docker by following the instructions for your OS: https://docs.docker.com/engine/install/.
To start the Docker daemon on Linux systems you need to run:
sudo systemctl start docker
On Linux systems you will need to use sudo to run all docker commands (sudo docker ...) unless you add your user to the docker group. To add your user to the docker group run the following:
sudo usermod -aG docker ${USER}
Then you need to log out and log back in; after that you can verify that you are part of the docker group by running groups.
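As a sanity check, this sketch reports whether your current shell session already belongs to the docker group (the message strings are just illustrative):

```shell
# Report whether the current shell session is in the docker group.
# Note: after usermod, a new login (or `newgrp docker`) is required
# before the group change takes effect.
if groups | grep -qw docker; then
  msg="user is in the docker group"
else
  msg="not yet in the docker group: log out and back in first"
fi
echo "$msg"
```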
Download the Docker image
Download the Docker image by running the following command (on Windows 10 you should run it in the PowerShell). Replace <tag> with centos7-full for the full image and with centos7-small for the small image.
docker pull davidegiri/esp:<tag>
You may need to log in with your Docker Hub credentials to be able to pull. In that case, add your credentials by running the following command; a prompt will ask for your password.
docker login -u <your-dockerhub-username>
Start the Docker container
Linux
On Linux you can start a Docker container in interactive mode as follows. The security-opt, network, env and volume arguments are needed to connect the container to the host machine display, so that it's possible to open graphical interfaces from inside the container. Replace <tag> with centos7-full for the full image and with centos7-small for the small image.
docker run -it --security-opt label=type:container_runtime_t --network=host -e DISPLAY=$DISPLAY -v "$HOME/.Xauthority:/root/.Xauthority:rw" davidegiri/esp:<tag> /bin/bash
MacOS
On MacOS you need to make sure that the X server is running and is configured properly to be able to open graphical interfaces from inside the Docker container:
- Install XQuartz.
- Open a terminal and launch the application with open -a XQuartz.
- From the XQuartz drop-down menu, open Preferences... and select the Security tab. Make sure that the check box "Allow connections from network clients" is enabled.
- From the same drop-down menu, quit XQuartz. The application must exit completely; then restart XQuartz.
- From a new terminal window, enable X forwarding for your current IP address:
ip=$(ifconfig en0 | grep inet | awk '$1=="inet" {print $2}')
xhost + $ip
Finally, you can run the container as follows. Replace <tag> with centos7-full for the full image and with centos7-small for the small image.
docker run -it --network=host -e DISPLAY=$ip:0 -v /tmp/.X11-unix:/tmp/.X11-unix davidegiri/esp:<tag> /bin/bash
Windows 10
On Windows 10 you can start a Docker container in interactive mode from the Docker dashboard GUI by clicking first on the RUN button next to the Docker image name and then on the CLI button next to the running container.
To be able to launch graphical interfaces from inside the container on Windows 10 you need a couple of extra steps (adapted from these guides: guide1, guide2):
- Install the VcXsrv Windows X Server and then launch XLaunch. In the XLaunch configuration steps use all the default configurations apart from selecting the "Disable access control" option, which is not selected by default.
- Find the IP of your Windows machine by running ipconfig in the PowerShell. Look for the IPv4 Address entry.
- Run export DISPLAY=<host-ip-address>:0.0 in the Docker container that you previously started with the CLI button.
Test X forwarding
From inside the container you can test the connection to the host display by running xeyes or xclock.
CAD tools with Docker
The CAD tools section has the complete list of CAD tools required by ESP. The Docker image doesn't contain any of those CAD tools, so next we describe a few options for working with CAD tools in the Docker container.
Install the CAD tools on your host machine from inside the container
Installing the CAD tools from inside the Docker container is convenient because all required packages are already installed. Moreover, this strategy is especially useful if your host machine doesn't run CentOS 7, which is the OS running in the Docker container. However, instead of installing the CAD tools inside the container, it's preferable to install them on the host machine, so that they can also be used by other containers, or even natively on the host machine in some cases. In addition, it's better to avoid committing the large CAD tools installation folders to the Docker image.
Installing the CAD tools on the host machine from inside the container can be done by using volumes or bind mounts, which are the two main ways for persisting data generated by and/or used by Docker containers. Use bind mounts if you want to specify the absolute path on the host machine where the CAD tools should be. Use volumes if you want the data to live inside Docker’s storage directory, which is managed by Docker (this is preferable on Windows and MacOS).
You can declare multiple volumes and bind mounts when you launch the container with docker run by adding the following arguments.
- Volume: -v <volume-name>:/cad-tools-path/inside/container. The volume called <volume-name> lives inside Docker's storage directory and will be accessible from inside the container at the path /cad-tools-path/inside/container.
- Bind mount: -v /cad-tools-path/on/host:/cad-tools-path/inside/container. The file or folder /cad-tools-path/on/host will be accessible from inside the container at the path /cad-tools-path/inside/container.
The idea is that from inside the container you can install the CAD tools at the path(s) of the volumes or bind mounts that you defined. Once the tools are installed, every container can access them if it receives the proper volume or bind mount arguments.
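As an illustration, both mechanisms can be combined in a single docker run invocation. The volume name and the host path below are placeholders chosen for this example, not paths that ESP requires:

```shell
# Hypothetical example: one named volume (managed by Docker) plus one
# bind mount (absolute host path); adjust names and paths to your setup.
docker run -it \
  -v cad-volume:/tools/cad \
  -v /opt/xilinx:/opt/xilinx \
  davidegiri/esp:centos7-full /bin/bash
```

Here cad-volume is created by Docker on first use, while /opt/xilinx must already exist on the host machine.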
Use CAD tools already installed on your host machine
This option is useful if you already have some of the CAD tools installed on your host machine and if the host machine OS matches the one of the Docker container. In that case you can simply give access to the CAD tools with a bind mount when you start a container, by passing the -v argument as described in the previous section: -v /cad-tools-path/on/host:/cad-tools-path/inside/container.
Install the CAD tools inside the container
If you want a fully self-contained container, you can install the tools directly inside the container, without defining any volumes or bind mounts. The issue with this solution is that the container size will increase considerably, making it less portable.
Environment variables with Docker
The ESP Docker image provides two scripts for setting the required environment variables, as specified in the environment variables section.
Source the esp_env.sh script if you don't need to use any CAD tools.
cd /home/espuser
source esp_env.sh
If you need some CAD tools, you can use the esp_env_cad.sh script instead. You should customize the script by inserting the paths of your tools and licenses and by commenting out the environment variables for the tools that you don't have. Then source the script.
cd /home/espuser
source esp_env_cad.sh
Useful Docker commands
Exit a container:
# run from inside the container
exit
List all local containers and their IDs:
docker ps -a
Stop a container:
docker stop <container-ID>
Start a container:
docker start <container-ID>
Attach to a running container:
docker attach <container-ID>
Copy data from host machine to a container:
docker cp <path-on-host> <container-ID>:/<path-inside-container>
Delete a container:
docker rm -f <container-ID>
List all local images:
docker images
Delete an image:
docker rmi <image-name>
Here is the complete Docker documentation.