Platform Development with Poky
Software development Poky supports several methods of software development. You can use the method that is best for you. This chapter describes each development method.
External Development Using the Poky SDK The meta-toolchain and meta-toolchain-sdk targets build tarballs that contain toolchains and libraries suitable for application development outside of Poky. For information on these targets, see the Reference: Images appendix. These tarballs unpack into the /opt/poky directory and contain a setup script (e.g. /opt/poky/environment-setup-i586-poky-linux) that you can source to initialize a suitable environment. Sourcing this script adds the compiler, QEMU scripts, the QEMU binary, a special version of pkgconfig and other useful utilities to the PATH variable. Variables to assist pkgconfig and autotools are also defined so that, for example, configure can find pre-generated test results for tests that need target hardware on which to run. Using the toolchain with autotools-enabled packages is straightforward - just pass the appropriate host option to configure. Following is an example:
$ ./configure --host=arm-poky-linux-gnueabi
For other projects it is usually a case of ensuring the cross tools are used, for example CC=arm-poky-linux-gnueabi-gcc and LD=arm-poky-linux-gnueabi-ld.
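As a minimal sketch, assuming an ARM toolchain tarball has been unpacked into /opt/poky and that the setup script is named environment-setup-arm-poky-linux-gnueabi, a typical out-of-tree build session might look like the following (the project path and name are illustrative):
$ source /opt/poky/environment-setup-arm-poky-linux-gnueabi
$ cd ~/src/myproject
$ ./configure --host=arm-poky-linux-gnueabi
$ make
For a plain Makefile-based project, invoking make with the cross tools from the same environment (e.g. make CC=arm-poky-linux-gnueabi-gcc LD=arm-poky-linux-gnueabi-ld) is usually sufficient.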
Using the Eclipse and Anjuta Plug-ins The Yocto Project supports both Anjuta and Eclipse IDE plug-ins to make developing software easier for the application developer. The plug-ins provide capability extensions to the graphical IDE, allowing for cross-compilation, deployment and execution of the output in a QEMU emulation session. These plug-ins also support cross-debugging and profiling. Additionally, the Eclipse plug-in provides a suite of tools that allows the developer to perform remote profiling, tracing, collection of power data, collection of latency data and collection of performance data.
The Eclipse Plug-in To use the Eclipse plug-in, a toolchain and SDK built by Poky are required, along with the Eclipse Framework (Helios 3.6.1). To install the plug-in, start the Eclipse IDE and select the following menu: Help -> Install New Software. Specify the target URL as . If you want to download the source code for the plug-in, you can find it in the Poky git repository, which has a web interface, and is located at .
Installing and Setting up the Eclipse IDE If you don't have the Eclipse IDE (Helios 3.6.1) on your system, you need to download and install it from . Choose Eclipse Classic, which contains the Eclipse Platform, the Java Development Tools (JDT), and the Plug-in Development Environment. The Java Virtual Machine's garbage collection (GC) process does not clean up the permanent generation space (PermGen), which stores meta-data descriptions of classes. The default value is set too small and it can trigger an out-of-memory error like the following:
java.lang.OutOfMemoryError: PermGen space
This error causes the application to hang. To fix this issue, use the -vmargs option when you start Eclipse to increase the size of the permanent generation space:
Eclipse -vmargs -XX:PermSize=256M
Installing the Yocto Plug-in Once you have the Eclipse IDE installed and configured, you need to install the Yocto plug-in. You do this in a manner similar to installing the Eclipse plug-in in the previous section. Do the following to install the Yocto plug-in into the Eclipse IDE: Select the "Help -> Install New Software" item. In the "Work with:" area, click "Add..." and enter the URL for the Yocto plug-in, which is . Finish out the installation of the update as you would any other Eclipse plug-in.
Configuring Yocto Eclipse plug-in To configure the Yocto Eclipse plug-in, you need to select the mode and the architecture with which you will be working. Start by selecting "Preferences" from the "Window" menu and then select "Yocto SDK". If you will normally use an installed Yocto SDK (under /opt/poky), select "SDK Root Mode". Otherwise, if your cross-toolchain and sysroot are within your poky tree, select "Poky Tree Mode". If you are in Poky Tree Mode, you need to provide your poky tree path, for example, $<Poky_tree>/build/. Next, you need to select the architecture. Use the drop-down list and select the architecture against which you will primarily be working. For the target option, select your typical target: QEMU or external hardware. If you choose QEMU, you need to specify your QEMU kernel file with its full path and the rootfs mount point. Yocto QEMU boots off user-mode NFS; see the Developing Externally in QEMU section for how to set it up. To make your settings the defaults for every new Yocto project created using the Eclipse IDE, simply save the settings.
Using the Yocto Eclipse Plug-in As an example, this section shows you how to cross-compile a Yocto C project that is autotools-based, deploy the project into QEMU, and then run the debugger against it. You need to configure the project, trigger autogen.sh, build the image, start QEMU, and then debug.
The following steps show how to create a Yocto autotools-based project using a given template: Select "File -> New -> Project" to start the wizard. Expand "C/C++" and select "C Project". Click "Next" and select a template (e.g. "Hello World ANSI C Project"). Complete the steps to create the new Yocto autotools-based project using your chosen template. By default, the project uses the Yocto preferences settings as defined using the procedure in the previous section. If the newly created project has any specific setup requirements, you need to reconfigure the Yocto plug-in as follows: Select the "Project -> Invoke Yocto Tools -> Reconfigure Yocto" menu item. Complete the dialogue to specify the specific toolchain and QEMU setup information.
To build the project, follow these steps: Select "Project -> Reconfigure Project" to trigger the autogen.sh command. Select "Project -> Build" to build the project.
To start QEMU, follow these steps: Select "Run -> External Tools" and see if there is a QEMU instance for the desired target. If one exists, click on the instance to start QEMU. If your target does not exist, click "External Tools Configuration" and you should find an instance of QEMU for your architecture listed under "Program". Wait for the boot to complete.
To deploy your project and start debugging, follow these steps: Highlight your project in the project explorer. Select "Run -> Debug Configurations" to bring up your remote debugging configuration in the right-hand window. Expand "C/C++ Remote Application". Select "projectname_gdb_target-poky-linux". Be sure there is an entry for the remote target. If no entry exists, click "New..." to bring up the wizard. Use the wizard to select TCF and enter the IP address of your remote target in the "Host name:" field. Back in the Remote Debug Configure window, specify in the "Remote Absolute File Path for C/C++ Application" field the absolute path for the program on the remote target. By default, the program deploys onto the remote target. If you don't want this behavior, check "Skip download to target path". Click "Debug" to start the remote debugging session.
Using Yocto Eclipse plug-in Remote Tools Suite Remote tools allow you to perform system profiling, kernel tracing, examine power consumption, and so forth. To see and access the remote tools, use the "Window -> YoctoTools" menu. Once you pick a tool, you need to configure it for the remote target. Every tool needs to have its connection configured. You must select an existing TCF-based RSE connection to the remote target. If one does not exist, click "New" to create one. Here are some specifics about the remote tools:
Oprofile: Selecting this tool launches the oprofile-server on the remote target and the oprofile-viewer on the local host machine. To use this tool, the oprofile-viewer must be installed on the local host machine and the oprofile-server must be installed on the remote target.
lttng: Selecting this tool runs "usttrace" on the remote target, transfers the output data back to the local host machine and uses "lttv-gui" to graphically display the output. The "lttv-gui" must be installed on the local host machine to use this tool. For information on how to use "lttng" to trace an application, see . For "Application" you must supply the absolute path name of the application to be traced by user mode lttng. For example, typing "/path/to/foo" triggers "usttrace /path/to/foo" on the remote target to trace the program /path/to/foo. "Argument" is passed to "usttrace" running on the remote target.
powertop: Selecting this tool runs "powertop" on the remote target machine and displays the results in a new view called "powertop". "Time to gather data(sec):" is the time passed in seconds before data is gathered from the remote target for analysis. "show pids in wakeups list:" corresponds to the -p argument passed to "powertop".
latencytop and perf: "latencytop" identifies system latency, while "perf" monitors the system's performance counter registers. Selecting either of these tools causes an RSE terminal view to appear from which you can run the tools. Both tools refresh the entire screen to display results while they run.
The Anjuta Plug-in Support for the Anjuta plug-in ends after the Yocto Project 0.9 release. However, the source code can be downloaded from the git repository listed later in this section. The community is free to continue supporting it beyond the 0.9 release. An Anjuta IDE plug-in exists to make developing software within the Poky framework easier for the application developer familiar with that environment. The plug-in presents a graphical IDE that allows you to cross-compile, cross-debug, profile, deploy, and execute an application. To use the plug-in, a toolchain and SDK built by Poky, Anjuta, its development headers, and the Anjuta plug-in are all required. The Poky Anjuta plug-in is available to download as a tarball at the OpenedHand labs page or directly from the Poky Git repository located at . You can also access a web interface to the repository at . See the README file contained in the project for more information on Anjuta dependencies and building the plug-in. If you want to disable remote gdb debugging, pass the "--disable-gdb-integration" switch when you configure the plug-in.
Setting Up the Anjuta Plug-in Follow these steps to set up the plug-in: Extract the tarball for the toolchain into / as root. The toolchain will be installed into /opt/poky. To use the plug-in, first open an existing project or create a new one. If you are creating a new project, the "C GTK+" project type will allow itself to be cross-compiled. However, you should be aware that this type uses "glade" for the UI. To activate the plug-in, select "Edit -> Preferences" and then choose "General" from the left-hand side. Choose the "Installed plug-ins" tab, scroll down to "Poky SDK" and check the box. The plug-in is now activated but not configured.
Configuring the Anjuta Plug-in You can find the configuration options for the SDK by choosing the Poky SDK icon from the left-hand side. You need to define the following options:
SDK root: If you use an external toolchain you need to set SDK root, which is the root directory of the SDK's sysroot. For an i586 SDK, this directory is /opt/poky/. The directory will contain "bin", "include", "var" and so forth under your selected target architecture subdirectory /opt/poky/sysroot/i586-poky-linux/. The cross-compile tools you need are in /opt/poky/sysroot/i586-pokysdk-linux/.
Poky root: If you have a local Poky build tree, you need to set the Poky root, which is the root directory of the Poky build tree. If you build your i586 target architecture under the build_x86 subdirectory within your Poky tree, the Poky root directory should be $<poky_tree>/build_x86/.
Target Architecture: This is the cross-compile triplet, for example, "i586-poky-linux". This target triplet is the prefix extracted from the setup script file's name. For example, if the script file name is /opt/poky/environment-setup-i586-poky-linux, then the extracted target triplet is "i586-poky-linux".
Kernel: Use the file chooser to select the kernel used with QEMU.
Root filesystem: Use the file chooser to select the root filesystem directory. This is the directory into which you extracted the poky-image-sdk tarball using "poky-extract-sdk".
Using the Anjuta Plug-in The steps in this section show how to cross-compile a project, deploy it into QEMU, run a debugger against it and then perform a system-wide profile.
Choose "Build -> Run Configure" or "Build -> Run Autogenerate" to run "configure" or "autogen", respectively, for the project. Either command passes command-line arguments to instruct the cross-compile.
Choose "Build -> Build Project" to build and compile the project. If you have previously built the project in the same tree without using the cross-compiler, you might find that your project fails to link. If this is the case, simply select "Build -> Clean Project" to remove the old binaries. After you clean the project you can try building it again.
Choose "Tools -> Start QEMU" to start QEMU. After QEMU starts, any error messages will appear in the message view. Once Poky has fully booted within QEMU you can deploy the project into it.
Once the project is built and you have QEMU running, choose "Tools -> Deploy" to install the package into a temporary directory and then copy it using "rsync" over SSH into the target. A progress bar and appropriate messages appear in the message view.
To debug a program installed onto the target, choose "Tools -> Debug remote". Choosing this menu item causes prompts to appear that ask you to define the local binary for debugging and the command line used to run it on the target. When you provide the command line, be sure to include the full path to the binary installed on the target. When the command line runs, a "gdbserver" is started over SSH on the target and an instance of "cross-gdb" starts in a local terminal. This instance of "cross-gdb" is preloaded to connect to the server and use the SDK root to find symbols. It connects to the target and loads in various libraries as well as the target program. You should define any breakpoints or watchpoints at this point in the process since you might not be able to interrupt the execution later. To stop the debugger on the target, choose "Tools -> Stop debugger".
It is also possible to execute a command on the target over SSH. Doing so causes the appropriate environment to be established for execution. To execute a command, choose "Tools -> Run remote". This selection opens a terminal with the SSH command inside.
To perform a system-wide profile against the system running in QEMU, choose "Tools -> Profile remote". This choice starts up "OProfileUI" with the appropriate parameters to connect to the server running inside QEMU and also supplies the path for the debug information necessary to get a useful profile.
Developing Externally in QEMU Running Poky QEMU images is covered in the Yocto Project Quick Start in the "A Quick Test Run" section. Poky's QEMU images contain a complete native toolchain. This means you can develop applications within QEMU similarly to the way you would in a normal system. Using qemux86 on an x86 machine is fast since the guest and host architectures match. On the other hand, using qemuarm can be slower but gives faithful emulation of ARM-specific issues. To speed things up, these images support using "distcc" to call a cross-compiler outside the emulated system. If "runqemu" was used to start QEMU, and "distccd" is present on the host system, any Bitbake cross-compiling toolchain available from the build system is automatically used from within QEMU simply by calling "distcc". You can accomplish this by defining the cross-compiler variable (e.g. export CC="distcc"). Alternatively, if a suitable SDK/toolchain is present in /opt/poky, it is also used automatically. There are several options for connecting into the emulated system. QEMU provides a framebuffer interface that has standard consoles available. There is also a serial connection available that has a system console running on it, and standard IP networking is available. The images have a dropbear ssh server running with the root password disabled to allow standard ssh and scp commands to work. The images also contain an NFS server that exports the guest's root filesystem, which allows it to be made available to the host.
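For example, a minimal session inside the emulated system might look like the following sketch, assuming the image was started with "runqemu", "distccd" is running on the host, and a hypothetical autotools-based project is already present in the guest:
$ export CC="distcc"
$ ./configure
$ make
Because distcc forwards the actual compilation to the host's cross-compiler, the build runs considerably faster than compiling natively inside the emulator.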
Developing in Poky Directly Working directly in Poky is a fast and effective development technique. The idea is that you can directly edit files in WORKDIR or the source directory S and then force specific tasks to rerun in order to test the changes. An example session working on the matchbox-desktop package might look like this:
$ bitbake matchbox-desktop
$ sh
$ cd tmp/work/armv5te-poky-linux-gnueabi/matchbox-desktop-2.0+svnr1708-r0/
$ cd matchbox-desktop-2
$ vi src/main.c
$ exit
$ bitbake matchbox-desktop -c compile -f
$ bitbake matchbox-desktop
This example builds the package, changes into the work directory for the package, changes a file, and then recompiles the package. Instead of using "sh" as in the example, you can also use two different terminals. However, the risk of using two terminals is that a command like "unpack" could destroy the changes you've made to the work directory, so you need to work carefully. When making changes directly to the work directory files, it is useful to do so using "quilt" as detailed in the modifying packages with quilt section. You can copy the resulting patches into the recipe directory and use them directly in the SRC_URI. For a review of the skills used in this section, see the Bitbake and Running Specific Tasks sections.
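As a brief illustration of the quilt workflow mentioned above (the patch and file names are hypothetical; see the modifying packages with quilt section for the full procedure), working inside the package's source directory might look like:
$ quilt new fix-main.patch
$ quilt add src/main.c
$ vi src/main.c
$ quilt refresh
The refreshed patch then appears under the patches/ directory, from where it can be copied into the recipe directory and listed in SRC_URI.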
Developing with 'devshell' When debugging certain commands or even when just editing packages, the 'devshell' can be a useful tool. Use a command like the following to start this tool:
$ bitbake matchbox-desktop -c devshell
This command opens a terminal with a shell prompt within the Poky environment. Consequently, the following occurs: The PATH variable includes the cross toolchain. The pkgconfig variables find the correct .pc files. "configure" finds the Poky site files as well as any other necessary files. Within this environment, you can run "configure" or "compile" commands as if they were being run by Poky itself. The working directory also automatically changes to the source directory (S). When you are finished, you just exit the shell or close the terminal window. The default shell used by "devshell" is gnome-terminal. You can use other forms of terminal by setting the TERMCMD and TERMCMDRUN variables in local.conf. For examples of the other options available, see meta/conf/bitbake.conf. An external shell is launched rather than opening directly into the original terminal window. This allows easier interaction with Bitbake's multiple threads as well as for a future client/server split. Note that "devshell" will still work over X11 forwarding or similar situations. It is worth remembering that inside "devshell" you need to use the full compiler name, such as arm-poky-linux-gnueabi-gcc, instead of just gcc. The same applies to other tools such as binutils, libtool and so forth. Poky will have set up environment variables such as CC to assist applications, such as make, in finding the correct tools.
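As a rough local.conf sketch for switching the devshell terminal (the exact values below are assumptions; check meta/conf/bitbake.conf for the forms your Poky version expects), you might set something like:
TERMCMD = 'xterm -T "$TERMWINDOWTITLE"'
TERMCMDRUN = '${TERMCMD} -e'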
Developing within Poky with an External SCM-based Package If you're working on a recipe that pulls from an external SCM, it is possible to have Poky notice new changes added to the SCM and then build the latest version using them. This only works for SCMs from which it is possible to get a sensible revision number for changes. Currently it works for svn, git and bzr repositories. To enable this behavior, simply add SRCREV_pn-PN = "${AUTOREV}" to local.conf, where PN is the name of the package for which you want to enable automatic source revision updating.
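For example, a minimal local.conf entry for a hypothetical package named "foo" fetched from an external git repository might look like this:
# Always build foo from the latest revision in its SCM (the package name is illustrative)
SRCREV_pn-foo = "${AUTOREV}"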
Debugging with GDB Remotely GDB (The GNU Project Debugger) allows you to examine running programs to understand and fix problems, and also to perform postmortem-style analysis of program crashes. It is available as a package within Poky and is installed by default in SDK images. It works best when -dbg packages for the application being debugged are installed, as the extra symbols give more meaningful output from GDB. Sometimes, due to memory or disk space constraints, it is not possible to use GDB directly on the remote target to debug applications. This is because GDB needs to load the debugging information and the binaries of the process being debugged, and then perform many computations to locate information such as function names, variable names and values, stack traces and so forth, even before starting the debugging process. This places load on the target system and can alter the characteristics of the program being debugged. This is where GDBSERVER comes into play, as it runs on the remote target and does not load any debugging information from the debugged process. Instead, the debugging information processing is done by a GDB instance running on a distant computer - the host GDB. The host GDB then sends control commands to GDBSERVER to make it stop or start the debugged program, as well as read or write memory regions of that debugged program. All the debugging information loading and processing, as well as the heavy debugging duty, is done by the host GDB, allowing the GDBSERVER running on the target to remain small and fast. Because the host GDB is responsible for loading the debugging information and doing the necessary processing to make actual debugging happen, the user has to make sure it can access the unstripped binaries, complete with their debugging information and compiled with no optimisations. The host GDB must also have local access to all the libraries used by the debugged program. On the remote target the binaries can remain stripped, as GDBSERVER does not need any debugging information there. However, they must also be compiled without optimisation so they match the host's binaries. The binary being debugged on the remote target machine is referred to as the 'inferior', in keeping with GDB documentation and terminology. Further documentation on GDB is available on the GDB site.
Launching GDBSERVER on the target First, make sure gdbserver is installed on the target. If not, install the gdbserver package (which needs the libthread-db1 package). To launch GDBSERVER on the target and make it ready to "debug" a program located at /path/to/inferior, connect to the target and launch:
$ gdbserver localhost:2345 /path/to/inferior
After that, gdbserver should be listening on port 2345 for debugging commands coming from a remote GDB process running on the host computer. Communication between the GDBSERVER and the host GDB will be done using TCP. To use other communication protocols please refer to the GDBSERVER documentation.
Launching GDB on the host computer Running GDB on the host computer takes a number of stages, described in the following sections.
Build the cross GDB package A suitable gdb cross binary is required that runs on your host computer but knows about the ABI of the remote target. This can be obtained from the Poky toolchain, e.g. /usr/local/poky/eabi-glibc/arm/bin/arm-poky-linux-gnueabi-gdb, where "arm" is the target architecture and "linux-gnueabi" is the target ABI. Alternatively, this can be built directly by Poky. To do this you would build the gdb-cross package; for example you would run:
bitbake gdb-cross
Once built, the cross gdb binary can be found at tmp/sysroots/<host-arch>/usr/bin/<target-abi>-gdb
Making the inferior binaries available The inferior binary needs to be available to GDB complete with all debugging symbols in order to get the best possible results, along with any libraries the inferior depends on and their debugging symbols. There are a number of ways this can be done. Perhaps the easiest is to have an 'sdk' image corresponding to the plain image installed on the device. In the case of 'poky-image-sato', 'poky-image-sdk' would contain suitable symbols. The sdk images already have the debugging symbols installed, so it's just a question of expanding the archive to some location and telling GDB where this is. Alternatively, Poky can build a custom directory of files for a specific debugging purpose by reusing its tmp/rootfs directory on the host computer in a slightly different way than normal. This directory contains the contents of the last built image. This process assumes the image running on the target was the last image to be built by Poky, and that the package foo, which contains the inferior binary to be debugged, has been built without optimisation and has debugging information available. Firstly you want to install the foo package to tmp/rootfs by doing:
tmp/sysroots/i686-linux/usr/bin/opkg-cl -f \
tmp/work/<target-abi>/poky-image-sato-1.0-r0/temp/opkg.conf -o \
tmp/rootfs/ update
then:
tmp/sysroots/i686-linux/usr/bin/opkg-cl -f \
tmp/work/<target-abi>/poky-image-sato-1.0-r0/temp/opkg.conf \
-o tmp/rootfs install foo
tmp/sysroots/i686-linux/usr/bin/opkg-cl -f \
tmp/work/<target-abi>/poky-image-sato-1.0-r0/temp/opkg.conf \
-o tmp/rootfs install foo-dbg
which installs the debugging information too.
Launch the host GDB To launch the host GDB, run the cross gdb binary identified above with the inferior binary specified on the command line:
<target-abi>-gdb rootfs/usr/bin/foo
This loads the binary of program foo as well as its debugging information. Once the gdb prompt appears, you must instruct GDB to load all the libraries of the inferior from tmp/rootfs:
set solib-absolute-prefix /path/to/tmp/rootfs
where /path/to/tmp/rootfs must be the absolute path to tmp/rootfs or wherever the binaries with debugging information are located. Now, tell GDB to connect to the GDBSERVER running on the remote target:
target remote remote-target-ip-address:2345
where remote-target-ip-address is the IP address of the remote target on which the GDBSERVER is running, and 2345 is the port on which it is listening.
Using the Debugger Debugging can now proceed as normal, as if the debugging were being done on the local machine. For example, to tell GDB to break in the main function:
break main
and then to tell GDB to continue the inferior's execution:
continue
For more information about using GDB, please see the project's online documentation at .
Profiling with OProfile OProfile is a statistical profiler well suited to finding performance bottlenecks in both userspace software and the kernel. It provides answers to questions like "Which functions does my application spend the most time in when doing X?". Poky is well integrated with OProfile to make profiling applications on target hardware straightforward. To use OProfile you need an image with OProfile installed. The easiest way to achieve this is with "tools-profile" in IMAGE_FEATURES. You also need debugging symbols to be available on the system where the analysis will take place. This can be achieved with "dbg-pkgs" in IMAGE_FEATURES or by installing the appropriate -dbg packages. For successful call graph analysis the binaries must preserve the frame pointer register and hence should be compiled with the "-fno-omit-frame-pointer" flag. In Poky this can be achieved with SELECTED_OPTIMIZATION = "-fexpensive-optimizations -fno-omit-frame-pointer -frename-registers -O2" or by setting DEBUG_BUILD = "1" in local.conf (the latter will also add extra debug information, making the debug packages large).
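As a minimal local.conf sketch for an OProfile-ready image (treat these exact lines as assumptions to adapt to your configuration; EXTRA_IMAGE_FEATURES is the usual local.conf hook for adding image features):
# Add profiling tools and debug symbols to generated images
EXTRA_IMAGE_FEATURES = "tools-profile dbg-pkgs"
# Preserve frame pointers so call graph analysis works
SELECTED_OPTIMIZATION = "-fexpensive-optimizations -fno-omit-frame-pointer -frename-registers -O2"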
Profiling on the target All the profiling work can be performed on the target device. A simple OProfile session might look like:
# opcontrol --reset
# opcontrol --start --separate=lib --no-vmlinux -c 5
[do whatever is being profiled]
# opcontrol --stop
$ opreport -cl
Here, the reset command clears any previously profiled data and OProfile is then started. The options used to start OProfile mean dynamic library data is kept separately per application, kernel profiling is disabled and callgraphing is enabled up to 5 levels deep. To profile the kernel, you would specify the --vmlinux=/path/to/vmlinux option (the vmlinux file is usually in /boot/ in Poky and must match the running kernel). The profile is then stopped and the results are viewed with opreport, with options to see the separate library symbols and callgraph information. Callgraphing means OProfile not only logs information about which functions time is being spent in, but also which functions called those functions (their parents) and which functions that function calls (its children). The higher the callgraphing depth, the more accurate the results, but this also increases the logging overhead, so it should be used with caution. On ARM, binaries need to have the frame pointer enabled for callgraphing to work (compile with the gcc option -fno-omit-frame-pointer). For more information on using OProfile please see the OProfile online documentation at .
Using OProfileUI A graphical user interface for OProfile is also available. You can download and build it from svn at . If the "tools-profile" image feature is selected, all necessary binaries are installed onto the target device for OProfileUI interaction. In order to convert the sample data from the target into a format usable on the host, the opimport program is needed. This is not included in standard Debian OProfile packages, but an OProfile package with this addition is also available from the OpenedHand repository. We recommend using OProfile 0.9.3 or greater. Other patches to OProfile may be needed for recent OProfileUI features, but Poky usually includes all needed patches on the target device. Please see the OProfileUI README for up-to-date information, and the OProfileUI website for more information on the OProfileUI project.
Online mode This assumes a working network connection with the target hardware. In this case you just need to run "oprofile-server" on the device. By default it listens on port 4224. This can be changed with the --port command-line option. The client program is called oprofile-viewer. The UI is relatively straightforward; the key functionality is accessed through the buttons on the toolbar (which are duplicated in the menus). These buttons are:
Connect - connect to the remote host. The IP address or hostname for the target can be supplied here.
Disconnect - disconnect from the target.
Start - start profiling on the device.
Stop - stop profiling on the device and download the data to the local host. This will generate the profile and show it in the viewer.
Download - download the data from the target, generate the profile and show it in the viewer.
Reset - reset the sample data on the device. This will remove the sample information that was collected on a previous sampling run. Ensure you do this if you do not want to include old sample information.
Save - save the data downloaded from the target to another directory for later examination.
Open - load data that was previously saved.
The behaviour of the client is to download the complete 'profile archive' from the target to the host for processing. This archive is a directory containing the sample data, the object files and the debug information for said object files. The archive is then converted using a script included in this distribution ('oparchconv') that uses 'opimport' to convert the archive from the target format to something that can be processed on the host. Downloaded archives are kept in /tmp and cleared up when they are no longer in use. If you wish to profile into the kernel, this is possible; you just need to ensure a vmlinux file matching the running kernel is available. In Poky this is usually located in /boot/vmlinux-KERNELVERSION, where KERNELVERSION is the version of the kernel, e.g. 2.6.23. Poky generates separate vmlinux packages for each kernel it builds, so it should be a question of just ensuring a matching package is installed (e.g. opkg install kernel-vmlinux). These are automatically installed into development and profiling images alongside OProfile. There is a configuration option within the OProfileUI settings page where the location of the vmlinux file can be entered. Waiting for debug symbols to transfer from the device can be slow, and it's not always necessary to actually have them on the device for OProfile use. All that is needed is a copy of the filesystem with the debug symbols present on the viewer system. The GDB remote debug section covers how to create such a directory with Poky, and the location of this directory can again be specified in the OProfileUI settings dialog. If specified, it will be used where the file checksums match those on the system being profiled.
Offline mode If no network access to the target is available, an archive for processing in 'oprofile-viewer' can be generated with the following set of commands:
# opcontrol --reset
# opcontrol --start --separate=lib --no-vmlinux -c 5
[do whatever is being profiled]
# opcontrol --stop
# oparchive -o my_archive
where my_archive is the name of the archive directory where you would like the profile archive to be kept. The directory will be created for you. The archive can then be copied to another host and loaded using 'oprofile-viewer''s open functionality. The archive will be converted if necessary.