Platform Development with Poky
Software development Poky supports several methods of software development. You can use the method that is best for you. This chapter describes each development method.
External Development Using the Poky SDK The meta-toolchain and meta-toolchain-sdk targets (see the images section) build tarballs that contain toolchains and libraries suitable for application development outside of Poky. These tarballs unpack into the /opt/poky directory and contain a setup script (e.g. /opt/poky/environment-setup-i586-poky-linux) which you can source to initialize a suitable environment. Sourcing this script adds the compiler, QEMU scripts, the QEMU binary, a special version of pkgconfig and other useful utilities to the PATH variable. Variables are also set to assist pkgconfig and autotools so that, for example, configure can find pre-generated test results for tests that need target hardware on which to run. Using the toolchain with autotools-enabled packages is straightforward: just pass the appropriate host option to configure, as in the following example: $ ./configure --host=arm-poky-linux-gnueabi For other projects it is usually a case of ensuring the cross tools are used: CC=arm-poky-linux-gnueabi-gcc and LD=arm-poky-linux-gnueabi-ld
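For projects that do not use autotools, the same effect can be achieved by exporting the cross tools explicitly before invoking the build. A minimal sketch, assuming the SDK environment script has already been sourced (the ARM triplet is the example used above; substitute the one for your target):

```shell
# Sketch: point a plain Makefile-based build at the Poky cross tools.
# Assumes the SDK environment setup script has already been sourced so
# the tools are on PATH; the ARM prefix is the example from the text.
CROSS_PREFIX=arm-poky-linux-gnueabi
export CC="${CROSS_PREFIX}-gcc"
export LD="${CROSS_PREFIX}-ld"
echo "CC=$CC LD=$LD"
# ./configure --host="${CROSS_PREFIX}"   # for autotools projects
# make                                   # for plain Makefile projects
```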
Using the Eclipse and Anjuta Plug-ins Yocto Project supports both Anjuta and Eclipse IDE plug-ins to make developing software easier for the application developer. The plug-ins provide capability extensions to the graphical IDE, allowing for cross compilation, deployment and execution of the output in a QEMU emulation session. These plug-ins also support cross debugging and profiling. Additionally, the Eclipse plug-in provides a suite of tools that allows the developer to perform remote profiling, tracing, collection of power data, collection of latency data and collection of performance data.
The Eclipse Plug-in To use the Eclipse plug-in, a toolchain and SDK built by Poky is required along with the Eclipse Framework (Helios 3.6). To install the plug-in you need to be in the Eclipse IDE and select the following menu: Help -> Install New Software Specify the target URL as http://yocto./download (real link needed). If you want to download the source code for the plug-in you can find it in the Poky git repository, which has a web interface, and is located at .
Installing and Setting up the Eclipse IDE If you don't have the Eclipse IDE (Helios 3.6) on your system you need to download and install it from . Choose the Eclipse Classic, which contains the Eclipse Platform, Java Development Tools (JDT), and the Plug-in Development Environment. NOTE: Due to the way the Java Virtual Machine's garbage collection (GC) process works, the permanent generation space (PermGen) is not cleaned up. This space is used to store meta-data descriptions of classes. The default value is set too small and it could trigger an out of memory error like the following: java.lang.OutOfMemoryError: PermGen space This error causes the application to hang. To fix this issue you can use the -vmargs option when you start Eclipse to increase the size of the permanent generation space: eclipse -vmargs -XX:PermSize=256M
Installing the Yocto Plug-in Once you have the Eclipse IDE installed and configured you need to install the Yocto plug-in. You do this in a manner similar to installing the Eclipse plug-ins in the previous section. Do the following to install the Yocto plug-in into the Eclipse IDE: Select the "Help -> Install New Software" item. In the "Work with:" area click "Add..." and enter the URL for the Yocto plug-in (we need to supply this URL). Finish out the installation of the update as with any other Eclipse plug-in.
Configuring Yocto Eclipse plug-in To configure the Yocto Eclipse plug-in you need to select the mode and then the architecture with which you will be working. Start by selecting "Preferences" from the "Window" menu and then selecting "Yocto SDK". If you normally use an installed Yocto SDK (under /opt/poky), select "SDK Root Mode". Otherwise, if your cross toolchain and sysroot are within your poky tree, select "Poky Tree Mode". If you are in Poky Tree Mode you will need to provide your poky tree path, for example $<Poky_tree>/build/. Now you need to select the architecture. Use the drop down list and select the architecture that you'll be primarily working against. For the target option, select your typical target: QEMU or External HW. If you choose QEMU, you'll need to specify your QEMU kernel file with full path and the rootfs mount point. Yocto QEMU boots off user-mode NFS; please refer to the QEMU section for how to set it up. (Section TBD) Save all your settings and they become your defaults for every new Yocto project created using the Eclipse IDE.
Using the Yocto Eclipse Plug-in As an example, this section shows you how to cross-compile a Yocto C autotools based project, deploy it into QEMU, and then run the debugger against it. You need to configure the project, trigger autogen.sh, build the image, start QEMU, and then debug. Creating a Yocto Autotools Based Project Using a Template: Get to the Wizard selection by selecting the File -> New -> Project menu. Expand "C/C++" and select "C Project". Click "Next" and select a template to start with, for example "Hello World ANSI C Project". Complete the steps to create a new Yocto autotools based project using this template. Specify Specific Toolchain Configurations: By default the project uses the Yocto preferences settings as defined using the procedure in the previous section. If the newly created project has any specific setup requirements, you need to reconfigure the Yocto plug-in through the menu selection Project -> Invoke Yocto Tools -> Reconfigure Yocto. Use this dialogue to specify toolchain and QEMU setups specific to the project. Building the Project: Trigger autogen.sh through Project -> Reconfigure Project. Then build the project using Project -> Build. Starting QEMU: Use the Run -> External Tools menu and see if there is a QEMU instance for the desired target. If there is, click on the instance to start QEMU. If your target is not there, click "External Tools Configuration"; you should find an instance of QEMU for your architecture under "Program". After the boot completes you are ready to deploy the image into QEMU. Debugging: To bring up your remote debugging configuration in the right-hand window, highlight your project in "Project Explorer", select the Run -> Debug Configurations menu item and expand "C/C++ Remote Application". Next, select projectname_gdb_target-poky-linux. You need to be sure that there is an entry for the remote target you want to deploy to and cross debug with.
If there is no entry then click "New..." to bring up the wizard. Using the wizard, select TCF and enter the IP address of your remote target in the "Host name:" field. Back in the remote debug configuration window, you need to specify the absolute path for the program on the remote target in the "Remote Absolute File Path for C/C++ Application" field. By default, the program deploys into the remote target. If you don't want this, check "Skip download to target path". Finally, click "Debug" to start the remote debugging session.
Using Yocto Eclipse plug-in Remote Tools Suite Remote tools let you do things like perform system profiling, kernel tracing, examine power consumption, and so forth. To see and access the remote tools use the Window -> YoctoTools menu. Once you pick a tool you need to configure it for the remote target. Every tool needs to have its connection configured: you have to select an existing TCF-based RSE connection to the remote target, and if one does not exist you need to create one by clicking "New". Here are some specifics about the remote tools: Oprofile: Selecting this tool launches the oprofile-viewer on the local host machine, which connects to the oprofile-server running on the remote target. To use this tool, the oprofile-viewer must be installed on the local host machine and the oprofile-server must be installed on the remote target. lttng: Selecting this tool runs usttrace on the remote target, transfers the output data back to the local host machine and uses lttv-gui to graphically display the output. To use this tool the lttv-gui must be installed on the local host machine. See for information on how to use lttng to trace an application. For "Application" you must supply the absolute path name to the application to be traced by user mode lttng. For example, typing /path/to/foo triggers usttrace /path/to/foo on the remote target to trace the program /path/to/foo. "Argument" is passed to usttrace running on the remote target. powertop: Selecting this tool runs powertop on the remote target machine and displays the result in a new view called "powertop". "Time to gather data(sec):" is the time passed in seconds before data is gathered from the remote target for analysis. "show pids in wakeups list:" corresponds to the -p argument passed to powertop. latencytop and perf: latencytop identifies system latency, while perf monitors the system's performance counter registers. Selecting either of these tools causes an RSE terminal view to appear in which you can run the tools.
Both tools refresh the entire screen to display results while they run.
The Anjuta Plug-in Note: We will stop Anjuta plug-in support after the Yocto Project 0.9 release. Its source code can be downloaded from the git repository listed below and is free for the community to continue supporting moving forward. An Anjuta IDE plugin exists to make developing software within the Poky framework easier for the application developer. It presents a graphical IDE with which you can cross compile an application, then deploy and execute the output in a QEMU emulation session. It also supports cross debugging and profiling. To use the plugin, a toolchain and SDK built by Poky are required, along with Anjuta, its development headers and the Anjuta plugin. The Poky Anjuta plugin is available to download as a tarball at the OpenedHand labs page or directly from the Poky Git repository located at . You can also access a web interface to the repository at . See the README file contained in the project for more information on Anjuta dependencies and building the plugin. If you want to disable remote gdb debugging, pass the --disable-gdb-integration switch when running configure.
Setting Up the Anjuta Plug-in Follow these steps to set up the plug-in: Extract the tarball for the toolchain into / as root. The toolchain will be installed into /opt/poky. To use the plug-in, first open an existing project or create a new one. If you are creating a new project, the "C GTK+" project type will allow itself to be cross-compiled. However, you should be aware that this project type uses glade for the UI. To activate the plug-in go to Edit -> Preferences, then choose General from the left hand side. Choose the Installed plug-ins tab, scroll down to Poky SDK and check the box. The plug-in is now activated but not configured. See the next section to learn how to configure it.
Configuring the Anjuta Plugin You can find the configuration options for the SDK by choosing the Poky SDK icon from the left hand side. You need to set the following options: SDK root: If you use an external toolchain you need to set SDK root. This is the root directory of the SDK's sysroot. For an i586 SDK this will be /opt/poky/. This directory will contain bin, include, var and so forth under your selected target architecture subdirectory /opt/poky/sysroot/i586-poky-linux/. The cross compile tools you need are in /opt/poky/sysroot/i586-pokysdk-linux/. Poky root: If you have a local poky build tree, you need to set the Poky root. This is the root directory of the poky build tree. If you build your i586 target architecture under the build_x86 subdirectory within your poky tree, the Poky root directory should be $<poky_tree>/build_x86/. Target Architecture: This is the cross compile triplet, for example "i586-poky-linux". This target triplet is the prefix extracted from the set up script file name. For example, "i586-poky-linux" is extracted from the set up script file /opt/poky/environment-setup-i586-poky-linux. Kernel: Use the file chooser to select the kernel to use with QEMU. Root filesystem: Use the file chooser to select the root filesystem directory. This is the directory where you used poky-extract-sdk to extract the poky-image-sdk tarball.
Using the Anjuta Plug-in This section uses an example that cross-compiles a project, deploys it into QEMU, runs a debugger against it and then does a system wide profile. Choose Build -> Run Configure or Build -> Run Autogenerate to run "configure" or "autogen", respectively, for the project. Either command passes command-line arguments to instruct the cross-compile. Select Build -> Build Project to build and compile the project. If you have previously built the project in the same tree without using the cross-compiler you might find that your project fails to link. If this is the case, simply select Build -> Clean Project to remove the old binaries. After you clean the project you can try building it again. Start QEMU by selecting Tools -> Start QEMU. This menu selection starts QEMU and shows any error messages in the message view. Once Poky has fully booted within QEMU you can deploy the project into it. Once the project is built and you have QEMU running, choose Tools -> Deploy. This selection installs the package into a temporary directory and then copies it to the target using rsync over SSH. Progress and messages appear in the message view. To debug a program installed onto the target choose Tools -> Debug remote. This selection prompts you for the local binary to debug and also the command line to run on the target. The command line to run should include the full path to the binary installed on the target. This starts a gdbserver over SSH on the target and also an instance of a cross-gdb in a local terminal, preloaded to connect to the server and use the SDK root to find symbols. This gdb will connect to the target and load in various libraries and the target program. You should set up any breakpoints or watchpoints now since you might not be able to interrupt the execution later. You can stop the debugger on the target using Tools -> Stop debugger.
It is also possible to execute a command on the target over SSH; the appropriate environment will be set for the execution. Choose Tools -> Run remote to do this. This selection opens a terminal with the SSH command inside. To do a system wide profile against the system running in QEMU choose Tools -> Profile remote. This selection starts up OProfileUI with the appropriate parameters to connect to the server running inside QEMU and also supplies the path to the debug information necessary to get a useful profile.
Developing externally in QEMU Running Poky QEMU images is covered in the Running an Image section. Poky's QEMU images contain a complete native toolchain. This means that applications can be developed within QEMU in the same way as on a normal system. Using qemux86 on an x86 machine is fast since the guest and host architectures match; qemuarm is slower but gives faithful emulation of ARM-specific issues. To speed things up, these images support using distcc to call a cross-compiler outside the emulated system too. If runqemu was used to start QEMU, and distccd is present on the host system, any bitbake cross-compiling toolchain available from the build system will automatically be used from within QEMU simply by calling distcc (export CC="distcc" can be set in the environment). Alternatively, if a suitable SDK/toolchain is present in /opt/poky it will also automatically be used. There are several options for connecting into the emulated system. QEMU provides a framebuffer interface which has standard consoles available. There is also a serial connection available which has a console to the system running on it, and IP networking as standard. The images have a dropbear ssh server running with the root password disabled, allowing standard ssh and scp commands to work. The images also contain an NFS server exporting the guest's root filesystem, allowing that to be made available to the host.
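Inside the emulated system, switching a build over to distcc is purely a matter of the environment. A minimal sketch, assuming runqemu started the image and distccd is running on the host as described above:

```shell
# Sketch: inside the QEMU guest, route compilation through distcc so
# compile jobs run on the host's cross-compiler. Assumes runqemu
# started the image and distccd is available on the host.
export CC="distcc"
echo "Compiler command: $CC"
# ./configure && make   # subsequent builds now distribute via distcc
```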
Developing in Poky directly Working directly in Poky is a fast and effective development technique. The idea is that you can directly edit files in WORKDIR or the source directory S and then force specific tasks to rerun in order to test the changes. An example session working on the matchbox-desktop package might look like this:

$ bitbake matchbox-desktop
$ sh
$ cd tmp/work/armv5te-poky-linux-gnueabi/matchbox-desktop-2.0+svnr1708-r0/
$ cd matchbox-desktop-2
$ vi src/main.c
$ exit
$ bitbake matchbox-desktop -c compile -f
$ bitbake matchbox-desktop

Here, we build the package, change into the work directory for the package, change a file, then recompile the package. Instead of using sh like this, you can also use two different terminals. The risk with working like this is that a command like unpack could wipe out the changes you've made to the work directory, so you need to work carefully. It is useful when making changes directly to the work directory files to do so using quilt as detailed in the modifying packages with quilt section. The resulting patches can be copied into the recipe directory and used directly in the SRC_URI. For a review of the skills used in this section see Sections 2.1.1 and 2.4.2.
Developing with 'devshell' When debugging certain commands or even just editing packages, the 'devshell' can be a useful tool. To start it you run a command like: $ bitbake matchbox-desktop -c devshell which will open a terminal with a shell prompt within the Poky environment. This means PATH is set up to include the cross toolchain, the pkgconfig variables are set up to find the right .pc files, configure will be able to find the Poky site files, etc. Within this environment you can run configure or compile commands as if they were being run by Poky itself. You are also changed into the source (S) directory automatically. When finished with the shell, just exit it or close the terminal window. The default shell used by devshell is gnome-terminal. Other forms of terminal can also be used by setting the TERMCMD and TERMCMDRUN variables in local.conf. For examples of the other options available, see meta/conf/bitbake.conf. An external shell is launched rather than opening directly into the original terminal window to make interaction with bitbake's multiple threads easier and also to allow a client/server split of bitbake in the future (devshell will still work over X11 forwarding or similar). It is worth remembering that inside devshell you need to use the full compiler name, such as arm-poky-linux-gnueabi-gcc, instead of just gcc, and the same applies to other applications such as gcc, binutils, libtool, etc. Poky will have set up environment variables such as CC to help applications, such as make, find the correct tools.
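For example, to have devshell use xterm instead of gnome-terminal, local.conf could contain something along these lines (a sketch only; the XTERM_* variable names are assumed from meta/conf/bitbake.conf, so verify them there before relying on this):

```
# Hypothetical local.conf fragment: use xterm for devshell.
# XTERM_TERMCMD and XTERM_TERMCMDRUN are alternative definitions
# expected in meta/conf/bitbake.conf; check that file for the exact
# names in your Poky version.
TERMCMD = "${XTERM_TERMCMD}"
TERMCMDRUN = "${XTERM_TERMCMDRUN}"
```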
Developing within Poky with an external SCM based package If you're working on a recipe which pulls from an external SCM it is possible to have Poky notice new changes added to the SCM and then build the latest version. This only works for SCMs where it is possible to get a sensible revision number for changes. Currently it works for svn, git and bzr repositories. To enable this behaviour it is simply a case of adding SRCREV_pn-PN = "${AUTOREV}" to local.conf, where PN is the name of the package for which you want to enable automatic source revision updating.
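For example, to have the matchbox-desktop recipe track the latest revision in its upstream repository, local.conf would contain:

```
# local.conf fragment: rebuild matchbox-desktop from the latest
# revision in its source repository on each build.
SRCREV_pn-matchbox-desktop = "${AUTOREV}"
```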
Debugging with GDB Remotely GDB (the GNU Project Debugger) allows you to examine running programs to understand and fix problems, and also to perform postmortem-style analysis of program crashes. It is available as a package within Poky and installed by default in sdk images. It works best when the -dbg packages for the application being debugged are installed, as the extra symbols give more meaningful output from GDB. Sometimes, due to memory or disk space constraints, it is not possible to use GDB directly on the remote target to debug applications. This is due to the fact that GDB needs to load the debugging information and the binaries of the process being debugged. GDB then needs to perform many computations to locate information such as function names, variable names and values, stack traces, etc. even before starting the debugging process. This places load on the target system and can alter the characteristics of the program being debugged. This is where GDBSERVER comes into play, as it runs on the remote target and does not load any debugging information from the debugged process. Instead, the debugging information processing is done by a GDB instance running on a distant computer - the host GDB. The host GDB then sends control commands to GDBSERVER to make it stop or start the debugged program, as well as read or write memory regions of that debugged program. All the debugging information loading and processing, as well as the heavy debugging duty, is done by the host GDB, giving the GDBSERVER running on the target a chance to remain small and fast. As the host GDB is responsible for loading the debugging information and doing the necessary processing to make actual debugging happen, the user has to make sure it can access the unstripped binaries complete with their debugging information and compiled with no optimisations. The host GDB must also have local access to all the libraries used by the debugged program.
On the remote target the binaries can remain stripped, as GDBSERVER does not need any debugging information there. However, they must be compiled without optimisation, matching the host's binaries. The binary being debugged on the remote target machine is referred to as the 'inferior', in keeping with GDB documentation and terminology. Further documentation on GDB is available on their site.
Launching GDBSERVER on the target First, make sure gdbserver is installed on the target. If not, install the gdbserver package (which needs the libthread-db1 package). To launch GDBSERVER on the target and make it ready to "debug" a program located at /path/to/inferior, connect to the target and launch: $ gdbserver localhost:2345 /path/to/inferior After that, gdbserver should be listening on port 2345 for debugging commands coming from a remote GDB process running on the host computer. Communication between the GDBSERVER and the host GDB will be done using TCP. To use other communication protocols please refer to the GDBSERVER documentation.
Launching GDB on the host computer Running GDB on the host computer takes a number of stages, described in the following sections.
Build the cross GDB package A suitable gdb cross binary is required which runs on your host computer but knows about the ABI of the remote target. This can be obtained from the Poky toolchain, e.g. /usr/local/poky/eabi-glibc/arm/bin/arm-poky-linux-gnueabi-gdb, where "arm" is the target architecture and "linux-gnueabi" the target ABI. Alternatively this can be built directly by Poky. To do this you would build the gdb-cross package; for example you would run: bitbake gdb-cross Once built, the cross gdb binary can be found at tmp/sysroots/<host-arch>/usr/bin/<target-abi>-gdb
Making the inferior binaries available The inferior binary needs to be available to GDB complete with all debugging symbols in order to get the best possible results, along with any libraries the inferior depends on and their debugging symbols. There are a number of ways this can be done. Perhaps the easiest is to have an 'sdk' image corresponding to the plain image installed on the device. In the case of 'poky-image-sato', 'poky-image-sdk' would contain suitable symbols. The sdk images already have the debugging symbols installed, so it is just a question of expanding the archive to some location and telling GDB where this is. Alternatively, Poky can build a custom directory of files for a specific debugging purpose by reusing its tmp/rootfs directory on the host computer in a slightly different way to normal. This directory contains the contents of the last built image. This process assumes that the image running on the target was the last image to be built by Poky, and that the package foo, which contains the inferior binary to be debugged, has been built without optimisation and has debugging information available. Firstly you want to install the foo package to tmp/rootfs by doing:

tmp/sysroots/i686-linux/usr/bin/opkg-cl -f \
tmp/work/<target-abi>/poky-image-sato-1.0-r0/temp/opkg.conf -o \
tmp/rootfs/ update

then:

tmp/sysroots/i686-linux/usr/bin/opkg-cl -f \
tmp/work/<target-abi>/poky-image-sato-1.0-r0/temp/opkg.conf \
-o tmp/rootfs install foo

tmp/sysroots/i686-linux/usr/bin/opkg-cl -f \
tmp/work/<target-abi>/poky-image-sato-1.0-r0/temp/opkg.conf \
-o tmp/rootfs install foo-dbg

which installs the debugging information too.
Launch the host GDB To launch the host GDB, run the cross gdb binary identified above with the inferior binary specified on the command line: <target-abi>-gdb rootfs/usr/bin/foo This loads the binary of program foo as well as its debugging information. Once the gdb prompt appears, you must instruct GDB to load all the libraries of the inferior from tmp/rootfs: set solib-absolute-prefix /path/to/tmp/rootfs where /path/to/tmp/rootfs must be the absolute path to tmp/rootfs or wherever the binaries with debugging information are located. Now, tell GDB to connect to the GDBSERVER running on the remote target: target remote remote-target-ip-address:2345 where remote-target-ip-address is the IP address of the remote target where the GDBSERVER is running, and 2345 is the port on which the GDBSERVER is listening.
Using the Debugger Debugging can now proceed as normal, as if the debugging were being done on the local machine. For example, to tell GDB to break in the main function: break main and then to tell GDB to continue the inferior's execution: continue For more information about using GDB please see the project's online documentation at .
Profiling with OProfile OProfile is a statistical profiler well suited to finding performance bottlenecks in both userspace software and the kernel. It provides answers to questions like "Which functions does my application spend the most time in when doing X?". Poky is well integrated with OProfile to make profiling applications on target hardware straightforward. To use OProfile you need an image with OProfile installed. The easiest way to do this is with "tools-profile" in IMAGE_FEATURES. You also need debugging symbols to be available on the system where the analysis will take place. This can be achieved with "dbg-pkgs" in IMAGE_FEATURES or by installing the appropriate -dbg packages. For successful call graph analysis the binaries must preserve the frame pointer register and hence should be compiled with the "-fno-omit-frame-pointer" flag. In Poky this can be achieved with SELECTED_OPTIMIZATION = "-fexpensive-optimizations -fno-omit-frame-pointer -frename-registers -O2" or by setting DEBUG_BUILD = "1" in local.conf (the latter will also add extra debug information, making the debug packages large).
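Putting these settings together, a local.conf prepared for profiling might look like the following sketch (it simply combines the IMAGE_FEATURES values and the SELECTED_OPTIMIZATION flags quoted above):

```
# local.conf fragment for an OProfile-ready image: install the
# profiling tools and debug symbols, and keep frame pointers so
# call graph analysis works.
IMAGE_FEATURES += "tools-profile dbg-pkgs"
SELECTED_OPTIMIZATION = "-fexpensive-optimizations -fno-omit-frame-pointer -frename-registers -O2"
```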
Profiling on the target All the profiling work can be performed on the target device. A simple OProfile session might look like:

# opcontrol --reset
# opcontrol --start --separate=lib --no-vmlinux -c 5
[do whatever is being profiled]
# opcontrol --stop
$ opreport -cl

Here, the reset command clears any previously profiled data, then OProfile is started. The options used to start OProfile mean dynamic library data is kept separately per application, kernel profiling is disabled and callgraphing is enabled up to 5 levels deep. To profile the kernel, you would specify the --vmlinux=/path/to/vmlinux option (the vmlinux file is usually in /boot/ in Poky and must match the running kernel). The profile is then stopped and the results viewed with opreport, with options to see the separate library symbols and callgraph information. Callgraphing means OProfile not only logs information about which functions time is being spent in, but also which functions called those functions (their parents) and which functions that function calls (its children). The higher the callgraphing depth, the more accurate the results, but this also increases the logging overhead, so it should be used with caution. On ARM, binaries need to have the frame pointer enabled for callgraphing to work (compile with the gcc option -fno-omit-frame-pointer). For more information on using OProfile please see the OProfile online documentation at .
Using OProfileUI A graphical user interface for OProfile is also available. You can download and build it from svn at . If the "tools-profile" image feature is selected, all necessary binaries are installed onto the target device for OProfileUI interaction. In order to convert the data in the sample format from the target to the host the opimport program is needed. This is not included in standard Debian OProfile packages but an OProfile package with this addition is also available from the OpenedHand repository. We recommend using OProfile 0.9.3 or greater. Other patches to OProfile may be needed for recent OProfileUI features, but Poky usually includes all needed patches on the target device. Please see the OProfileUI README for up to date information, and the OProfileUI website for more information on the OProfileUI project.
Online mode This assumes a working network connection with the target hardware. In this case you just need to run "oprofile-server" on the device. By default it listens on port 4224. This can be changed with the --port command line option. The client program is called oprofile-viewer. The UI is relatively straightforward; the key functionality is accessed through the buttons on the toolbar (which are duplicated in the menus). These buttons are: Connect - connect to the remote host; the IP address or hostname for the target can be supplied here. Disconnect - disconnect from the target. Start - start profiling on the device. Stop - stop profiling on the device and download the data to the local host. This will generate the profile and show it in the viewer. Download - download the data from the target, generate the profile and show it in the viewer. Reset - reset the sample data on the device. This will remove the sample information collected on a previous sampling run. Ensure you do this if you do not want to include old sample information. Save - save the data downloaded from the target to another directory for later examination. Open - load data that was previously saved. The behaviour of the client is to download the complete 'profile archive' from the target to the host for processing. This archive is a directory containing the sample data, the object files and the debug information for said object files. The archive is then converted using a script included in this distribution ('oparchconv') that uses 'opimport' to turn the archive from the target into something that can be processed on the host. Downloaded archives are kept in /tmp and cleared up when they are no longer in use. Profiling into the kernel is also possible; you just need to ensure a vmlinux file matching the running kernel is available. In Poky this is usually located in /boot/vmlinux-KERNELVERSION, where KERNELVERSION is the version of the kernel, e.g. 2.6.23.
Poky generates separate vmlinux packages for each kernel it builds, so it should be a question of just ensuring a matching package is installed (opkg install kernel-vmlinux). These are automatically installed into development and profiling images alongside OProfile. There is a configuration option within the OProfileUI settings page where the location of the vmlinux file can be entered. Waiting for debug symbols to transfer from the device can be slow, and it's not always necessary to actually have them on the device for OProfile use. All that is needed is a copy of the filesystem with the debug symbols present on the viewer system. The GDB remote debug section covers how to create such a directory with Poky, and the location of this directory can again be specified in the OProfileUI settings dialog. If specified, it will be used where the file checksums match those on the system being profiled.
Offline mode If no network access to the target is available, an archive for processing in 'oprofile-viewer' can be generated with the following set of commands:

# opcontrol --reset
# opcontrol --start --separate=lib --no-vmlinux -c 5
[do whatever is being profiled]
# opcontrol --stop
# oparchive -o my_archive

where my_archive is the name of the archive directory where you would like the profile archive to be kept. The directory will be created for you. This can then be copied to another host and loaded using the open functionality of 'oprofile-viewer'. The archive will be converted if necessary.