To whom it may concern,
The following components have been deprecated and will be removed in the next (21.05) release of ArmNN:
* armnnQuantizer - Now that the TensorFlow Lite Converter (https://www.tensorflow.org/lite/convert/) has mature post-training quantization capabilities, the need for this component has gone. See https://www.tensorflow.org/model_optimization/guide/quantization/post_train… and https://www.tensorflow.org/lite/performance/post_training_quantization for more details.
* armnnTfParser - As TensorFlow Lite is our current recommended deployment environment for ArmNN, and the TensorFlow Lite Converter provides a path for converting most common machine learning models into TensorFlow Lite format, the need for a TensorFlow parser has gone.
* armnnCaffeParser - Caffe is no longer as widely used a framework for machine learning as it once was.
IMPORTANT NOTICE: The contents of this email and any attachments are confidential and may also be privileged. If you are not the intended recipient, please notify the sender immediately and do not disclose the contents to any other person, use it for any purpose, or store or copy the information in any medium. Thank you.
Hi!
For PyArmNN (currently under development and planned for 20.05), we decided to move all the unit test resources such as json, npy files or models (onnx, tf, tflite, caffe) to http://snapshots.linaro.org/ so that they are not stored in the git repository and they are still publicly available.
A similar issue is there for the binaries in "tests" e.g. TfLiteMobilenetQuantized-Armnn. Yes, most of the models are available publicly, but some are either harder to find or are not available at all. Would it be viable to have all the resources required to run all the tests either on http://snapshots.linaro.org/ or https://releases.linaro.org/ ? (... and provide a download script or add it to the README)
Thanks!
Pavel
Hi all,
I would like to test the community's appetite for deprecating the TensorFlow and Caffe parsers. This would free up some development and test resource to focus on potentially more relevant features. The .armnn, .tflite and .onnx formats would continue to be supported and actively developed as viable alternative routes into ArmNN.
I would be interested to know whether you think this move would significantly and negatively affect any known/existing workflows with ArmNN. Any thoughts or comments are welcome.
Thanks,
Derek
[Resending due to mail bounce]
Hi Pavel,
Thanks for your email, I've modified this response to more clearly indicate what we would like from a design pov and to more clearly indicate that much of this work is open for community contributions.
It should be fine to move PyArmNN into armnn master with a few small modifications. Ideally, I'd rather not have any generated files checked in to the repository; however, any scripts to execute the generation command can be checked in. To remove the hard dependency on Tox (I think we are in agreement on this), it would be best to move the generation commands into a separate stand-alone (bash or other?) script or scripts which the Tox script then calls directly, and to make the generation an optional build step in the CMake build (which can also then call these same generation scripts). Users and contributors can then regenerate the bindings as needed without requiring Tox. Tox can therefore remain as a purely optional convenience for testing against multiple Python versions (which we will use in our internal CI). So, to answer one of your questions: at least initially, we would like PyArmNN to be introduced purely as source which can then be built for the target machine. The work required would be as follows:
1. Refactor (a) the SWIG code generation commands and (b) the subsequent Python source packaging commands out of Tox into stand-alone scripts which Tox then calls.
2. Add an optional build step in CMake to generate PyArmNN source files and python source package.
Does that sound reasonable?
One of my other main concerns in the short term is that the PyArmNN interface is kept up to date and working with the changing code base. We don't currently have tests in place to ensure the PyArmNN interface is (a) stable and (b) kept up to date as new interfaces are added to ArmNN. If possible, I would like to see these tests run on every check-in via the Linaro build robot. We have not planned or scheduled any effort towards this, so if you can pick this up, that would be most valuable.
Making PyArmNN available via the package managers is an aspiration we would like to work towards, but there are some things we'd like to achieve first on that path. Mainly, ArmNN (and the corresponding PyArmNN) is currently strongly bound to a particular release version of the API. I would like to make stronger API and ABI guarantees than we currently do for the ArmNN frontend. This would probably require a stable C-like interface, similar in fact to what PyArmNN is doing now. That would enable us to introduce semantic versioning, which would allow ArmNN to work better as a system library, possibly even distributed as a Debian package (we are working on the Debian packaging at the moment). I'd also like to apply the same paradigm to the backend interface, but this is a MUCH larger and scarier endeavor, and I'm not satisfied that the backend API is mature/stable enough for that just yet. We haven't scheduled this work, so it is useful work that anyone can pick up. At the very least, I'd like to start an open dialogue to find out (a) how important this is for the community, and (b) how we can get to that point.
Making PyArmNN available as a WHL, as I understand it, requires prebuilt binaries for all the potential targets, which introduces a host of headaches we would like to avoid at the moment, though this might become easier if ArmNN is provided as a system library with a stable ABI. I'm keen to get other opinions on this though.
@Alexander can correct me if I'm wrong on any of this, but I believe that with the source package you can use "pip install" already; you just have to point it at the generated source package rather than using the package manager. Given the limitations with ABI stability, I think publishing it on pypi.org would introduce a constant ongoing maintenance cost which we would rather solve by getting some of the fundamentals (i.e. weaker version dependencies) sorted first.
Also, on the documentation: we plan to make it available via GitHub Pages. I think Alexander and his team have done an awesome job on the PyArmNN docs.
If you have more questions or suggestions, or if any of this doesn't work for you, let us know.
Regards,
Derek
From: Matthew Bentham
Sent: 12 February 2020 13:38
To: Pavel Macenauer; armnn-dev(a)lists.linaro.org; Georgios Pinitas; Derek Lamberti; Alexander Efremov
Subject: Re: pyarmnn integration
+George, Derek, Alexander,
Please can you guys help Pavel? And keep the list on cc for visibility please.
Many thanks,
Matthew
________________________________
From: Armnn-dev <armnn-dev-bounces(a)lists.linaro.org> on behalf of Pavel Macenauer <pavel.macenauer(a)nxp.com>
Sent: 10 February 2020 21:00
To: armnn-dev(a)lists.linaro.org <armnn-dev(a)lists.linaro.org>
Subject: [Armnn-dev] pyarmnn integration
Hi!
There is a branch, experimental/pyarmnn, created by Matthew Bentham, which contains Python wrappers for armnn and initially seems to work pretty well: building a whl archive works, the archive can be installed using pip, and I was able to write an example which runs inference on a float/quantized model using all the supported frameworks (tf, tf-lite, caffe and onnx). What is missing is to get the Python wrappers integrated, run and check the unit tests, and write a few examples. We have discussed this with Matthew already, but I would be glad to hear more opinions on how we should proceed and to kick off a discussion.
1. How to integrate pyarmnn?
There are two paths initially:
a) Build pyarmnn together with armnn using a single cmake command.
* By default it would be turned off, and enabled using e.g. -DBUILD_PYARMNN.
* The product is either a whl or a src package - so should there be two options, e.g. -DBUILD_PYARMNN_SRC and -DBUILD_PYARMNN_WHL, or only a single one which always builds both?
b) Separate pyarmnn from armnn into a different repository (and keep it as a separate project).
* In addition to the options in a), -DARMNN_LIB and -DARMNN_INCLUDE would be required as well, so that it can be "linked" against a configurable armnn build.
The difference is mainly in maintainability: a) forces us to maintain pyarmnn and update the SWIG files to regenerate the wrappers for every release; b) keeps the project separate and allows pyarmnn to be built against a configurable armnn release, without creating a dependency to update the SWIG files whenever the armnn interface changes a little.
2. Remove tox? Yes/No - Tox is a Python automation tool used to generate the wrappers and to run the unit tests. It is not strictly needed, because the wrappers can be generated directly using SWIG and the src/whl packages built using Python/setuptools, so it just adds another dependency. The unit tests can also be run directly using Python.
3. Get pyarmnn published on pypi.org? Yes/No - we would then be able to install pyarmnn using "pip install pyarmnn".
Any additional ideas, comments, feedback etc. would be of course appreciated.
Thanks!
Pavel M
_______________________________________________
Armnn-dev mailing list
Armnn-dev(a)lists.linaro.org
https://lists.linaro.org/mailman/listinfo/armnn-dev
Hi all,
I'd like to poll the community's interest in having a stable API/ABI with strong semantic versioning guarantees. There are two levels where this could be applied.
1. Frontend API
2. Backend API
At present, any SW using ArmNN has to be built against a specific release version and all backends used have to be built for that same specific release version of ArmNN.
Achieving (1) would allow ArmNN libraries to be installed on the system more readily and any SW using ArmNN could target a specific ArmNN API version which could be compatible with multiple release versions of ArmNN.
Achieving (2) would allow backends to be distributed separately and work with a wider array of ArmNN versions.
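As a concrete illustration of what (1) and (2) would buy us, the compatibility rule that semantic versioning gives can be sketched as follows. This is a minimal sketch; the function and the (major, minor) version tuples are hypothetical, not an existing ArmNN API:

```python
def is_compatible(required, provided):
    """Return True if software built against API version `required`
    (major, minor) can run against an installed `provided` version.
    Rule: same major version, and the installed minor version must be
    at least the one built against."""
    return provided[0] == required[0] and provided[1] >= required[1]

# An application built against API 2.1:
print(is_compatible((2, 1), (2, 3)))  # newer minor, still compatible -> True
print(is_compatible((2, 1), (3, 0)))  # major bump breaks compatibility -> False
```

Under this rule, a backend or application would pin only a major version and a minimum minor version, rather than one exact ArmNN release.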
Any thoughts or feedback are most welcome.
Thanks,
Derek
Hi all,
Regarding the ILayerSupport interface in ILayerSupport.hpp: most of the methods take output TensorInfos, but some (e.g. IsDetectionPostProcessSupported) don't. This caused an issue in our custom backend because we were unable to check the output tensor info and reject the layer properly. I think it should be possible to have this information for all layers. What do you think?
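To illustrate why the output infos matter, a hypothetical backend capability check might look like the sketch below. The names and data types here are illustrative only, not the real ILayerSupport signatures:

```python
def is_layer_supported(input_dtypes, output_dtypes,
                       supported=("Float32", "QAsymmU8")):
    """Hypothetical backend check: a layer is supported only if the
    backend can consume every input AND produce every output. Without
    the output infos (as with IsDetectionPostProcessSupported today),
    the second half of this check is impossible."""
    return (all(d in supported for d in input_dtypes) and
            all(d in supported for d in output_dtypes))

print(is_layer_supported(["Float32"], ["Float32"]))  # -> True
print(is_layer_supported(["Float32"], ["Float16"]))  # unsupported output -> False
```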
Thanks,
Josh
Hello Derek,
Is this issue still open?
If it is, can I work on it?
On Mon, Oct 28, 2019 at 5:30 PM <armnn-dev-request(a)lists.linaro.org> wrote:
> Send Armnn-dev mailing list submissions to
> armnn-dev(a)lists.linaro.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> https://lists.linaro.org/mailman/listinfo/armnn-dev
> or, via email, send a message with subject or body 'help' to
> armnn-dev-request(a)lists.linaro.org
>
> You can reach the person managing the list at
> armnn-dev-owner(a)lists.linaro.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of Armnn-dev digest..."
>
>
> Today's Topics:
>
> 1. Re: ArmNN | ONXX model load issue (Derek Lamberti)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Mon, 28 Oct 2019 10:31:11 +0000
> From: Derek Lamberti <derek.lamberti(a)linaro.org>
> To: Rahul Chowdhury <rahul.c(a)pathpartnertech.com>
> Cc: Manjunath Kulkarni <manjunath.kulkarni(a)pathpartnertech.com>,
> armnn-dev(a)lists.linaro.org
> Subject: Re: [Armnn-dev] ArmNN | ONXX model load issue
> Message-ID:
> <CAPeFqV89WNV-sw20X0NB=
> tznsiMmNpw0bzj23HcgNSEmoFFmWg(a)mail.gmail.com>
> Content-Type: text/plain; charset="UTF-8"
>
> Hi Rahul,
>
>
> ArmNN doesn't support zero dimension tensors implicitly. Often this
> can be resolved by converting the tensor to a 1D tensor with 1
> element. We have done this conversion automatically within the TfLite
> parser and this has worked for a particular use case we ran into. A
> similar solution might work for your use case too. This could be done
> within the ToTensorInfo() function in OnnxParser.cpp. If this resolves
> the issue for you I'd recommend issuing a pull request so that we can
> integrate it into master.
>
>
> Hope that helps,
> ~Derek
>
> On Thu, 22 Aug 2019 at 15:32, Rahul Chowdhury
> <rahul.c(a)pathpartnertech.com> wrote:
> >
> > Hi,
> >
> > We are using ArmNN to cross-compile a standalone C++ application on Linux
> > that loads a standard onnx model. During the model loading, we see a
> crash
> > with the below error output -
> >
> > terminate called after throwing an instance of
> > 'armnn::InvalidArgumentException'
> > what(): Tensor numDimensions must be greater than 0
> >
> > Initially we were on armnn master, and later we switched to tag v19.05,
> but
> > the error was same for both.
> >
> > Below is the code snippet to load the model -
> > armnnOnnxParser::IOnnxParserPtr parser =
> > armnnOnnxParser::IOnnxParser::Create();
> > std::cout << "\nmodel load start";
> > armnn::INetworkPtr network =
> > parser->CreateNetworkFromBinaryFile("onnx_3DDFA.onnx");
> > std::cout << "\nmodel load end";
> >
> > It crashes after printing "model load start" with the error message
> printed
> > above.
> >
> > A gdb backtrace is also provided below -
> > (gdb) r
> > Starting program:
> > /home/root/Rahul/armnn_onnx/3DDFA_ArmNN_onnx/3ddfa_armnn_onnx
> > [Thread debugging using libthread_db enabled]
> > Using host libthread_db library "/lib/libthread_db.so.1".
> >
> > terminate called after throwing an instance of
> > 'armnn::InvalidArgumentException'
> > what(): Tensor numDimensions must be greater than 0
> > model load start
> > Program received signal SIGABRT, Aborted.
> > __GI_raise (sig=sig@entry=6) at
> > /usr/src/debug/glibc/2.26-r0/git/sysdeps/unix/sysv/linux/raise.c:51
> > 51 }
> > (gdb) bt
> > #0 __GI_raise (sig=sig@entry=6) at
> > /usr/src/debug/glibc/2.26-r0/git/sysdeps/unix/sysv/linux/raise.c:51
> > #1 0x0000ffffbe41df00 in __GI_abort () at
> > /usr/src/debug/glibc/2.26-r0/git/stdlib/abort.c:90
> > #2 0x0000ffffbe6aa0f8 in __gnu_cxx::__verbose_terminate_handler() ()
> from
> > /usr/lib/libstdc++.so.6
> > #3 0x0000ffffbe6a7afc in ?? () from /usr/lib/libstdc++.so.6
> > #4 0x0000ffffbe6a7b50 in std::terminate() () from
> /usr/lib/libstdc++.so.6
> > #5 0x0000ffffbe6a7e20 in __cxa_throw () from /usr/lib/libstdc++.so.6
> > #6 0x0000ffffbefdad84 in armnn::TensorShape::TensorShape(unsigned int,
> > unsigned int const*) () from
> /home/root/Rahul/armnn_onnx/build/libarmnn.so
> > #7 0x0000ffffbe7e34d8 in armnnOnnxParser::(anonymous
> > namespace)::ToTensorInfo(onnx::ValueInfoProto const&) [clone
> > .constprop.493] () from
> > /home/root/Rahul/armnn_onnx/build/libarmnnOnnxParser.so
> > #8 0x0000ffffbe7e4080 in
> >
> armnnOnnxParser::OnnxParser::SetupInfo(google::protobuf::RepeatedPtrField<onnx::ValueInfoProto>
> > const*) () from /home/root/Rahul/armnn_onnx/build/libarmnnOnnxParser.so
> > #9 0x0000ffffbe7e41ac in armnnOnnxParser::OnnxParser::LoadGraph() ()
> from
> > /home/root/Rahul/armnn_onnx/build/libarmnnOnnxParser.so
> > #10 0x0000ffffbe7e4760 in
> > armnnOnnxParser::OnnxParser::CreateNetworkFromModel(onnx::ModelProto&) ()
> > from /home/root/Rahul/armnn_onnx/build/libarmnnOnnxParser.so
> > #11 0x0000ffffbe7e49b0 in
> > armnnOnnxParser::OnnxParser::CreateNetworkFromBinaryFile(char const*) ()
> > from /home/root/Rahul/armnn_onnx/build/libarmnnOnnxParser.so
> > #12 0x0000000000402290 in main ()
> > (gdb)
> >
> >
> > Can someone point out if we are missing something out or doing something
> > wrong? Any help or input is highly appreciated.
> >
> >
> > Regards,
> > Rahul
> >
> > --
> >
> >
> >
> >
> >
> >
> > This message contains confidential information and is intended only for the individual(s) named. If you are not the intended recipient, you are notified that disclosing, copying, distributing or taking any action in reliance on the contents of this mail and attached file/s is strictly prohibited. Please notify the sender immediately and delete this e-mail from your system. E-mail transmission cannot be guaranteed to be secured or error-free as information could be intercepted, corrupted, lost, destroyed, arrive late or incomplete, or contain viruses. The sender therefore does not accept liability for any errors or omissions in the contents of this message, which arise as a result of e-mail transmission.
> > _______________________________________________
> > Armnn-dev mailing list
> > Armnn-dev(a)lists.linaro.org
> > https://lists.linaro.org/mailman/listinfo/armnn-dev
>
>
> ------------------------------
>
> End of Armnn-dev Digest, Vol 8, Issue 3
> ***************************************
>
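The workaround described in Derek's reply above (promoting a zero-dimension tensor to a 1-D tensor with one element) can be sketched as follows. This is illustrative Python, not the actual change, which would be C++ inside ToTensorInfo() in OnnxParser.cpp:

```python
def normalize_shape(dims):
    """Promote a zero-dimension (scalar) tensor shape to a 1-D shape
    with a single element, as the TfLite parser already does; leave
    non-scalar shapes unchanged."""
    return list(dims) if len(dims) > 0 else [1]

print(normalize_shape([]))             # scalar -> [1]
print(normalize_shape([3, 224, 224]))  # unchanged
```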
Hi,
I'm trying to send a minor patch for ArmNN for review, but I ran into the authentication failure below with 'git review' (I added the Gerrit remote with 'git remote add gerrit https://review.mlplatform.org/ml/armnn').
remote: Unauthorized
fatal: Authentication failed for 'https://review.mlplatform.org/ml/armnn/'
I can log in to the Gerrit server with the same username/password. Is there any special permission required? I cannot find related information on the mlplatform.org website.
Please let me know if I missed something.
Thanks,
Jammy