path: root/dynamic-layers/openembedded-layer
Commit message [Author | Date | Files | Lines]
* dldt-inference-engine: install custom opencl kernels as well [Anuj Mittal | 2019-11-29 | 1 file | -0/+5]
Install the OpenCL kernels and cldnn_global_custom_kernels.xml to allow specification of OpenCL kernels for custom layers.

Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
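A minimal sketch of what such an install step can look like in a BitBake recipe; the source and destination paths are illustrative assumptions, not taken from the commit:

    do_install_append() {
        # Install the clDNN custom OpenCL kernels plus their descriptor XML so
        # the GPU plugin can register custom layers at runtime (paths assumed).
        install -d ${D}${libdir}/cldnn_global_custom_kernels
        install -m 0644 ${S}/inference-engine/src/cldnn_engine/cldnn_global_custom_kernels/* \
            ${D}${libdir}/cldnn_global_custom_kernels/
    }
    FILES_${PN} += "${libdir}/cldnn_global_custom_kernels"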
* dldt-inference-engine: upgrade 2019r3 -> 2019r3.1 [Chin Huat Ang | 2019-11-29 | 1 file | -1/+1]
Signed-off-by: Chin Huat Ang <chin.huat.ang@intel.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
* dldt-inference-engine: enable vpu by default [Anuj Mittal | 2019-10-30 | 1 file | -4/+8]
- Make sure that USB udev rules for Intel(R) Movidius(TM) Neural Compute Stick and Intel(R) Neural Compute Stick 2 are packaged.
- Package vpu firmware only when it is enabled.

Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
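A sketch of how a recipe can express this, assuming a vpu PACKAGECONFIG option, a ${PN}-vpu-firmware package split and the usual NCS udev rules file name (all assumptions for illustration):

    # Enable the VPU plugin by default (the rest of the default option set is
    # omitted in this sketch).
    PACKAGECONFIG ??= "vpu"

    # Ship the NCS / NCS2 USB boot udev rules; the rules file name is assumed.
    FILES_${PN} += "${sysconfdir}/udev/rules.d/97-myriad-usbboot.rules"

    # Only package the Myriad firmware blobs when the vpu option is enabled.
    FILES_${PN}-vpu-firmware = "${@bb.utils.contains('PACKAGECONFIG', 'vpu', '${libdir}/*.mvcmd', '', d)}"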
* dldt-inference-engine: remove trailing slash from patchdir params [Martin Jansa | 2019-10-25 | 1 file | -8/+8]
There is an interesting issue in do_patch. I was debugging strange behavior with a .bbappend in which I had added another small patch, and the recipe started failing to configure completely. bitbake -e shows that all .patch files are in SRC_URI and log.do_patch shows that all of them were applied, but git diff (as well as patches/series) shows only the last one, the one added from the bbappend, as applied.

This was caused by the 8 existing patches in the .bb file using ;patchdir=../ while my patch in the .bbappend used ;patchdir=.. without the slash at the end. It should be fixed in quilt (or in how do_patch uses it), but for now just drop the trailing slash, because 99.9% of recipes use ;patchdir=.. without the slash. It's easily reproducible by removing the slash from the last patch (without any bbappend).

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
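To illustrate the difference being fixed (the patch file name is made up):

    # Before: recipe entries used a trailing slash in the patchdir parameter
    SRC_URI += "file://0001-example-build-fix.patch;patchdir=../"

    # After: no trailing slash, matching what .bbappend files typically use,
    # so quilt sees all patches as belonging to the same series
    SRC_URI += "file://0001-example-build-fix.patch;patchdir=.."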
* dldt-inference-engine: use firmware files from SRC_URI [Martin Jansa | 2019-10-25 | 2 files | -10/+116]
- Use the files from SRC_URI instead of fetching them during configure.
- Use the firmware files CMake would download during configure when they are not provided already. I haven't found any changelog explaining the difference between:
    firmware_ma2x8x_784.zip
    firmware_ma2x8x_mdk_R9.8.zip
    firmware_ma2450_759W.zip
    firmware_ma2450_784.zip
    firmware_mv0262_784.zip
    firmware_mv0262_mdk_R9.8.zip
  so to be safe, use whatever CMake does and what I was testing on raspberrypi4 before (because this fix was missing):
    -- Downloading from https://download.01.org/opencv/2019/openvinotoolkit/R3/inference_engine/firmware_ma2450_759W.zip to dldt-inference-engine/2019r3-r0/git/inference-engine/temp/download/firmware_ma2450_759W.zip ...
    -- Downloading from https://download.01.org/opencv/2019/openvinotoolkit/R3/inference_engine/firmware_ma2x8x_mdk_R9.8.zip to dldt-inference-engine/2019r3-r0/git/inference-engine/temp/download/firmware_ma2x8x_mdk_R9.8.zip ...
    -- Downloading from https://download.01.org/opencv/2019/openvinotoolkit/R3/inference_engine/firmware_mv0262_mdk_R9.8.zip to dldt-inference-engine/2019r3-r0/git/inference-engine/temp/download/firmware_mv0262_mdk_R9.8.zip ...
- Add PACKAGECONFIG verbose for VERBOSE_BUILD so it's easier to debug which firmware files are being used; in this case log.do_configure will show:
    -- ma2450=../mvnc/MvNCAPI-ma2450.mvcmd
    -- ma2x8x=../mvnc/MvNCAPI-ma2x8x.mvcmd
    -- mv0262=../mvnc/MvNCAPI-mv0262.mvcmd
- The patch is very loosely based on the patch removed here:
    http://git.yoctoproject.org/cgit/cgit.cgi/meta-intel/diff/dynamic-layers/openembedded-layer/recipes-support/opencv/files/0003-Supply-firmware-at-build-time.patch?id=649456856dc38086e7d43d740d68198e3395b067
  but I've reworked it completely to simplify it.

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
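A sketch of the SRC_URI side of such a change; the name= keys are illustrative, and the matching SRC_URI[...sha256sum] entries that a real recipe needs are omitted:

    # Fetch the firmware archives CMake would otherwise download at configure
    # time, so the build works offline and stays deterministic.
    SRC_URI += "\
        https://download.01.org/opencv/2019/openvinotoolkit/R3/inference_engine/firmware_ma2450_759W.zip;name=ma2450 \
        https://download.01.org/opencv/2019/openvinotoolkit/R3/inference_engine/firmware_ma2x8x_mdk_R9.8.zip;name=ma2x8x \
        https://download.01.org/opencv/2019/openvinotoolkit/R3/inference_engine/firmware_mv0262_mdk_R9.8.zip;name=mv0262 \
    "

    # Optional verbose configure output to see which firmware files are used
    # (the exact ON/OFF values are an assumption).
    PACKAGECONFIG[verbose] = "-DVERBOSE_BUILD=ON,-DVERBOSE_BUILD=OFF"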
* dldt-inference-engine: Add ISSL license for the firmware files [Martin Jansa | 2019-10-25 | 1 file | -2/+5]
Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
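One way such a license split typically looks in a recipe; whether the commit used exactly this form, and the per-package assignment, are assumptions:

    # The Myriad firmware blobs are distributed under the Intel Simplified
    # Software License (ISSL), while the rest of dldt is Apache-2.0.
    LICENSE = "Apache-2.0 & ISSL"
    LICENSE_${PN}-vpu-firmware = "ISSL"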
* dldt-inference-engine: return support for VPU [Martin Jansa | 2019-10-25 | 1 file | -2/+21]
- Add PACKAGECONFIG for vpu.
- Add extra package for firmware files.
- Tested on rpi4 with NCS2.

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
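A sketch of the two pieces named above; the cmake switch name and the firmware install location are assumptions:

    # Optional VPU (Myriad / Neural Compute Stick) plugin support.
    PACKAGECONFIG[vpu] = "-DENABLE_VPU=ON,-DENABLE_VPU=OFF"

    # Split the firmware blobs into their own package.
    PACKAGES =+ "${PN}-vpu-firmware"
    FILES_${PN}-vpu-firmware = "${libdir}/*.mvcmd"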
* dldt-inference-engine: install extension headers in includedir instead of share/doc [Martin Jansa | 2019-10-24 | 1 file | -1/+1]

Otherwise components depending on them won't be able to find them.

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
* dldt-inference-engine: add SSTATE_SCAN_FILES to fix CMake files [Martin Jansa | 2019-10-24 | 1 file | -0/+4]
Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
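SSTATE_SCAN_FILES extends the set of installed files that the shared-state machinery scans for hard-coded build paths; a sketch of its typical use (the exact glob here is an assumption):

    # Let sstate relocate absolute paths baked into the exported CMake config
    # files when the sysroot is populated from shared state.
    SSTATE_SCAN_FILES += "*.cmake"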
* dldt-inference-engine: build clDNN against opencl-icd-loader [Chin Huat Ang | 2019-10-21 | 1 file | -2/+1]
Instead of letting clDNN build against the intel_ocl_icd prebuilt binaries under clDNN/common/intel_ocl_icd, configure the cmake build to pick up opencl-icd-loader headers and libraries from the staging directory.

Do not set CMAKE_INSTALL_LOCAL_ONLY as it is unused.

Signed-off-by: Chin Huat Ang <chin.huat.ang@intel.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
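A sketch of how the recipe can point clDNN at the ICD loader from the sysroot; the cmake variable names are assumptions:

    DEPENDS += "opencl-icd-loader"

    # Use the OpenCL headers and loader staged in the recipe sysroot instead
    # of the prebuilt binaries bundled under clDNN/common/intel_ocl_icd.
    EXTRA_OECMAKE += "\
        -DOPENCL_INCLUDE_DIRS=${STAGING_INCDIR} \
        -DOPENCL_LIBRARIES=${STAGING_LIBDIR}/libOpenCL.so \
    "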
* dldt-inference-engine: update 2019r2 -> 2019r3 [Chin Huat Ang | 2019-10-21 | 9 files | -246/+215]
Refresh patches so that they apply cleanly on 2019r3.

Signed-off-by: Chin Huat Ang <chin.huat.ang@intel.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
* dldt-inference-engine: disable VPU plugins [Chin Huat Ang | 2019-10-21 | 1 file | -10/+1]
VPU plugins are untested; temporarily disable them.

Signed-off-by: Chin Huat Ang <chin.huat.ang@intel.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
* dldt-inference-engine: remove DEPENDS mkl-dnn [Chin Huat Ang | 2019-10-21 | 1 file | -1/+0]
Inference engine is still downloading and building its own copy of mkl-dnn, so remove it from DEPENDS.

Signed-off-by: Chin Huat Ang <chin.huat.ang@intel.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
* dldt-inference-engine: fix ptest failures [Chin Huat Ang | 2019-10-21 | 3 files | -0/+34]
Package libmock_engine.so as part of dldt-inference-engine-ptest and update run-ptest to set LD_LIBRARY_PATH to fix the following InferenceEngineUnitTests failures:

    FAIL: 12 tests, listed below:
    FAIL: PluginTest.canCreatePlugin
    FAIL: PluginTest.canCreatePluginUsingSmartPtr
    FAIL: PluginTest.shouldThrowExceptionIfPluginNotExist
    FAIL: PluginTest.canCallErrorHandlerIfNecessary
    FAIL: PluginTest.canForwardPluginEnginePtr
    FAIL: PluginTest.canSetConfiguration
    FAIL: PluginDispatcherTests.canLoadMockPlugin
    FAIL: PluginDispatcherTests.returnsIfLoadSuccessfull
    FAIL: SharedObjectLoaderTests.canLoadExistedPlugin
    FAIL: SharedObjectLoaderTests.canFindExistedMethod
    FAIL: SharedObjectLoaderTests.throwIfMethodNofFoundInLibrary
    FAIL: SharedObjectLoaderTests.canCallExistedMethod

Signed-off-by: Chin Huat Ang <chin.huat.ang@intel.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
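A sketch of the packaging side of such a fix; the build location of the mock library and how run-ptest picks up the path are assumptions:

    do_install_ptest_append() {
        # Install the mock plugin next to the unit tests so the shared-object
        # loader and plugin dispatcher tests can dlopen() it; run-ptest then
        # exports LD_LIBRARY_PATH pointing at ${PTEST_PATH}.
        install -m 0644 ${B}/libmock_engine.so ${D}${PTEST_PATH}/
    }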
* lms: Do not build on musl [Khem Raj | 2019-10-10 | 1 file | -0/+2]
It depends on ace, which is marked as incompatible with musl as well.

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
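The usual way to express this in a recipe is via the libc-musl override; whether the commit used exactly this form is an assumption:

    # Skip this recipe entirely on musl-based targets.
    COMPATIBLE_HOST_libc-musl = "null"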
* dldt-inference-engine: add PACKAGECONFIG for python API [Chin Huat Ang | 2019-10-10 | 1 file | -5/+11]
Add PACKAGECONFIG[python3] for building the dldt-inference-engine-python3 package, which contains the inference engine python API.

Also tweak the recipe to inherit python3native instead of relying on host python, as building the python API requires python3-cython, which might not be available on the host.

Signed-off-by: Chin Huat Ang <chin.huat.ang@intel.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
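A sketch of the pieces involved; the cmake switch name and the exact dependency wiring are assumptions:

    inherit python3native

    # Optional python API; pulls in cython for the build.
    PACKAGECONFIG[python3] = "-DENABLE_PYTHON=ON,-DENABLE_PYTHON=OFF,python3-cython-native"

    PACKAGES =+ "${PN}-python3"
    FILES_${PN}-python3 = "${PYTHON_SITEPACKAGES_DIR}"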
* dldt-inference-engine: fix clDNN install directory [Chin Huat Ang | 2019-09-30 | 2 files | -0/+28]
Install clDNN to /usr/lib to resolve the following inference engine error when running with the GPU plugin:

    [ ERROR ] Failed to create plugin libclDNNPlugin.so for device GPU
    Please, check your environment
    Cannot load library 'libclDNNPlugin.so': libclDNNPlugin.so: cannot open shared object file: No such file or directory
    /usr/src/debug/dldt-inference-engine/2019r2-r0/git/inference-engine/include/details/os/lin_shared_object_loader.h:36
    /usr/src/debug/dldt-inference-engine/2019r2-r0/git/inference-engine/src/inference_engine/ie_core.cpp:277

Signed-off-by: Chin Huat Ang <chin.huat.ang@intel.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
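One plausible shape of such a fix (the actual commit touched two files, likely a patch plus the recipe; the build output path here is an assumption):

    do_install_append() {
        # Make sure the clDNN GPU plugin ends up in ${libdir} so the runtime
        # dlopen() of libclDNNPlugin.so succeeds.
        install -d ${D}${libdir}
        install -m 0755 ${B}/bin/intel64/Release/lib/libclDNNPlugin.so ${D}${libdir}/
    }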
* dldt-inference-engine: upgrade 2019r1.1 -> 2019r2 [Anuj Mittal | 2019-09-30 | 14 files | -404/+751]
- Release notes: https://software.intel.com/en-us/articles/OpenVINO-RelNotes
- Enable unit tests to be built and tested using the ptest mechanism.
- Include patches from Clear Linux for build fixes.
- Switch to using python3 and switch threading to TBB. Switch ENABLE_OPENCV to off so the opencv from the system is used.
- Remove do_install and patch Makefiles instead to install libraries correctly.

Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
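A sketch of the configure flags implied by the points above; the THREADING variable name and the exact flag set passed by the recipe are assumptions:

    inherit ptest

    EXTRA_OECMAKE += "\
        -DTHREADING=TBB \
        -DENABLE_OPENCV=OFF \
    "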
* dldt-inference-engine: add recipe [Chin Huat Ang | 2019-09-28 | 8 files | -0/+498]
This recipe builds the inference engine from the opencv/dldt 2019 R1.1 release.

OpenVINO™ toolkit, short for Open Visual Inference and Neural network Optimization toolkit, provides developers with improved neural network performance on a variety of Intel® processors and helps further unlock cost-effective, real-time vision applications. The toolkit enables deep learning inference and easy heterogeneous execution across multiple Intel® platforms (CPU, Intel® Processor Graphics), providing implementations across cloud architectures to edge devices. For more details, see: https://01.org/openvinotoolkit

The recipe needs components from meta-oe, so move it to dynamic-layers/openembedded-layer. GPU plugin support needs intel-compute-runtime, which can be built by including the clang layer in the mix as well.

CPU and GPU plugins have been sanity tested to work using classification_sample. Further fine-tuning is still needed to improve the performance.

Original patch by Anuj Mittal.

Signed-off-by: Chin Huat Ang <chin.huat.ang@intel.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
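A heavily trimmed sketch of the general shape of such a recipe; SRCREV, license checksums, patches and most build configuration are omitted, and the fetch details are placeholders rather than the recipe's real values:

    SUMMARY = "Deep Learning Deployment Toolkit: Inference Engine"
    HOMEPAGE = "https://01.org/openvinotoolkit"
    LICENSE = "Apache-2.0"

    # Placeholder fetch line; the real recipe pins a specific 2019 R1.1 revision.
    SRC_URI = "git://github.com/opencv/dldt.git;branch=2019"
    S = "${WORKDIR}/git/inference-engine"

    DEPENDS = "opencv"

    inherit cmake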
* lms: upgrade 1921.0.0.0 -> 1932.0.0.0 [Alexander Usyskin | 2019-08-07 | 2 files | -11/+6]
Drop library packaging workarounds - not needed with the new sources.

Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
* lms: add recipe for lms 1921.0.0.0 [Alexander Usyskin | 2019-06-19 | 2 files | -0/+70]
This is a new release of Local Manageability Service. This open-source release deprecates unsupported lms7 and lms8.

This recipe depends on ACE and MeTee library recipes.

Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
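A minimal sketch of the general shape of such a recipe; apart from the ACE and MeTee dependencies named above, every value here is a placeholder:

    SUMMARY = "Intel(R) Local Manageability Service"
    LICENSE = "Apache-2.0"

    DEPENDS = "ace metee"

    # Placeholder fetch line; the real recipe pins the 1921.0.0.0 release.
    SRC_URI = "git://github.com/intel/lms.git"

    inherit cmake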