Using IceCC

IceCC and OE

It is possible, and quite easy, to compile on a cluster of machines with OE. This code is still somewhat experimental, but should stabilize rapidly thanks to the recent work by Ifaistos and earlier work from likewise and zecke (I hope I did not forget anyone). Take the following steps to prepare for compilation with icecc; how to set up icecc itself is beyond the scope of this intro. A consolidated local.conf sketch follows the list.

  • Put INHERIT += "icecc" in your local.conf
  • Set the scheduler URL in your customized create-icecc-env.sh script and set ICECC_ENV_EXEC = /path/to/your/oe/root/tmp/create-icecc-env.sh in your local.conf, or use the icecc service provided by your distribution (e.g. on Ubuntu, set ICECC_SCHEDULER_HOST in /etc/icecc/icecc.conf and enable START_ICECC in /etc/default/icecc)
  • Set ICECC_PATH = /usr/bin/icecc in your local.conf (be sure this matches the output of 'which icecc')
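
As a minimal sketch (the paths here are assumptions; adjust them to your checkout and to the output of 'which icecc'), the steps above might translate into local.conf entries like these:

INHERIT += "icecc"
ICECC_PATH = "/usr/bin/icecc"
# only needed when using the customized create-icecc-env.sh script
ICECC_ENV_EXEC = "/path/to/your/oe/root/tmp/create-icecc-env.sh"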

Users can now specify that certain packages, or packages belonging to a certain class, should not use icecc to distribute compile jobs to remote machines but should instead be handled locally, by defining ICECC_USER_CLASS_BL and ICECC_USER_PACKAGE_BL with the appropriate values in local.conf.
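
For example (the recipe names below are only placeholders, not a recommendation), local.conf might contain:

# keep native-class recipes and a couple of specific recipes local
ICECC_USER_CLASS_BL = " native"
ICECC_USER_PACKAGE_BL = " glibc qemu"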


ICECC config

#
# Nice level of running compilers
#
# ICECC_NICE_LEVEL="5"
ICECC_NICE_LEVEL="5"
#
# icecc daemon log file
#
# ICECC_LOG_FILE="/var/log/iceccd.log"
ICECC_LOG_FILE="/var/log/iceccd.log"
#
# Identification for the network the scheduler and daemon run on. 
# You can have several distinct icecc networks in the same LAN
# for whatever reason.
#
# ICECC_NETNAME=""
ICECC_NETNAME="oe"
# 
# You can overwrite here the number of jobs to run in parallel. Per
# default this depends on the number of (virtual) CPUs installed. 
#
# ICECC_MAX_JOBS=""
ICECC_MAX_JOBS="3"
#
# This is the directory where the icecc daemon stores the environments
# it compiles in. In a big network this can grow quite a bit, so use a
# path with enough free space if your /tmp is small - but the user
# icecc has to write to it.
# 
# ICECC_BASEDIR="/var/cache/icecc"
ICECC_BASEDIR="/var/cache/icecc"
#
# icecc scheduler log file
#
# ICECC_SCHEDULER_LOG_FILE="/var/log/icecc_scheduler.log"
ICECC_SCHEDULER_LOG_FILE="/var/log/icecc_scheduler.log"
#
# If the daemon can't find the scheduler by broadcast (e.g. because 
# of a firewall) you can specify it.
#
# ICECC_SCHEDULER_HOST=""
ICECC_SCHEDULER_HOST=""
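
If the daemons cannot reach the scheduler by broadcast (a firewall, or separate subnets), setting the scheduler host explicitly is the usual fix; see also the report under Successes and Failures below. The hostname here is only a placeholder:

ICECC_NETNAME="oe"
# use a hostname (or IP) that every build machine can resolve and ping
ICECC_SCHEDULER_HOST="buildmaster.example.com"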


Local Configuration

A sample local.conf entry for icecc that does not distribute compile jobs for native packages looks like this. Change the paths to match your setup.

PARALLEL_MAKE = "-j 10"
ICECC_PATH = "/usr/bin/icecc"
#ICECC_ENV_EXEC = "/proj/oplinux-0.2/op-linux/branches/oplinux-0.2/tmp/icecc-create-env"
ICECC_USER_CLASS_BL = " native"
INHERIT += "icecc"

Icecc and sstate interaction

Inheriting icecc changes the checksums for all tasks, so you cannot reuse sstate from a remote SSTATE_MIRROR if you are inheriting icecc and the builder populating the SSTATE_MIRROR is not.

In order to reuse sstate archives you need to inherit icecc on all builders. For builders where icecc doesn't make sense (e.g. a laptop while traveling), you can disable it with ICECC_DISABLED, which switches the icecc functionality off while keeping the same checksums.
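
As a sketch, a local.conf for such a builder could keep the class inherited but turn the distribution off (assuming your icecc.bbclass version honours ICECC_DISABLED):

INHERIT += "icecc"
# keep the icecc checksums but do not distribute compile jobs
ICECC_DISABLED = "1"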

Successes and Failures

This whole explanation is probably highly dependent on the icecc version used. Please do add success reports here. Also note that a lot of packages turn PARALLEL_MAKE off because they break when built in parallel on a single machine, although I have noticed that they do not fail under icecc. Any feedback on this would be helpful.

  • Mixing versions of the icecc package can create problems. For example, the icecc packages from Dapper and Edgy are incompatible.
  • I (Laibsch) have a working setup between two Edgy machines now. I had trouble until I set ICECC_NETNAME and ICECC_SCHEDULER_HOST to the respective, pingable hostnames of the machines. This is odd since both machines are on the same LAN (192.168.1.x) and there is no firewall.