Good afternoon,
I would like to announce a breakthrough in extending gdb with non-intrusive
instruction and function tracing on ARM processors using CoreSight ETM
traces, as described in previous mails.
The source code is public in the git repository
https://github.com/gzied/binutils-gdb/ in the branch gdb_arm_coresight.
This implementation was compiled and tested on an STM32MP1 (ARMv7
architecture) board running the "Linux arm 5.3.10-armv7-lpae-x15" distribution,
with the device tree modified to declare the CoreSight components.
To build the software:
> check out the branch gdb_arm_coresight from the repository above
> cd ..
> mkdir build
> cd build
> ../binutils-gdb/configure --with-arm-cs
> make
After a successful build, gdb will be available in the folder build/gdb.
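To quickly confirm the build you can run the freshly built binary, e.g.:
> ./gdb/gdb --version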
To run the software (a minimal example program is sketched after this walkthrough):
Build the software you would like to debug with debugging info (-g flag):
> gcc -g software.c -o software
Start debugging your software:
> gdb software
Set a breakpoint at main:
(gdb) b main
(gdb) run
Set a breakpoint at a location after the code you would like to trace:
(gdb) b my_function
Set your ETM sink. Sinks can be found by listing the folder
/sys/bus/event_source/devices/cs_etm/sinks/; use a sink that is capable of
storing the traces on chip (ETF, ETB, ...).
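For example, listing the sinks could look like this (the names shown are
only illustrative and platform specific):
> ls /sys/bus/event_source/devices/cs_etm/sinks/
tmc_etf0  tmc_etr0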
(gdb) set record btrace etm sink tmc_etf0
Start recording ETM traces:
(gdb) record btrace etm
Continue the execution of the program:
(gdb) c
Wait until the breakpoint is hit, then analyse and display the traces:
- at assembly level:
(gdb) record instruction-history
- at C level:
(gdb) record function-call-history /ilc
You can increase the number of instructions displayed by using:
(gdb) set record instruction-history-size <size>
(gdb) set record function-call-history-size <size>
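For reference, a minimal software.c matching the walkthrough above could look
as follows (the name my_function is just a placeholder; any function reached
after the code you want to trace will do):

#include <stdio.h>

/* Execution between the start of recording (while stopped at main) and the
 * breakpoint at my_function is what appears in the recorded ETM trace. */
static int my_function(int x)
{
        return x * 2;
}

int main(void)
{
        int sum = 0;
        int i;

        for (i = 0; i < 10; i++)
                sum += my_function(i);
        printf("sum = %d\n", sum);
        return 0;
}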
The work is still at an early stage and needs to be improved, extended and
stabilized. Your feedback and contributions are welcome.
Kind Regards
Zied Guermazi
CTIs are defined in the device tree and associated with other CoreSight
devices. The core CoreSight code has been modified to enable the registration
of the CTI devices on the same bus as the other CoreSight components,
but as these are not actually trace generation / capture devices, they
are not part of the CoreSight path when generating trace.
However, the definition of the standard CoreSight device has been extended
to include a reference to an associated CTI device, and the enable / disable
trace path operations will auto enable/disable any associated CTI devices at
the same time.
Programming is at present via sysfs - a full API is provided to utilise the
hardware capabilities. As CTI devices are unprogrammed by default, the auto
enable described above will have no effect until explicit programming takes
place.
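As a purely illustrative sketch (the device name cti_cpu0, the attribute
names and the argument order are assumptions here - the sysfs ABI document in
this set is the authoritative reference), programming a CPU CTI from the
shell, as root, might look roughly like the following: attach channel 0 to a
trigger input and a trigger output, then enable the device:
echo 0 1 > /sys/bus/coresight/devices/cti_cpu0/channels/trigin_attach
echo 0 2 > /sys/bus/coresight/devices/cti_cpu0/channels/trigout_attach
echo 1 > /sys/bus/coresight/devices/cti_cpu0/enable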
A set of device tree bindings specific to the CTI topology has been defined.
The driver accesses these in a platform agnostic manner, so ACPI bindings
can be added later, once they have been agreed and defined for the CTI device.
Documentation has been updated to describe the CTI hardware, its use and
programming in sysfs, and the new dts bindings required.
Tested on DB410 board and Juno board, against the Linux 5.6-rc3 tree.
Changes since v9:
1) Removed 2 unneeded devm_kstrdup calls, fixed error check on another.
2) Fixed variable array declaration from [0] to [].
Changes since v8:
1) Use devm_ allocation in cti_match_fixup_csdev() to match other allocations.
2) Minor comment update per request.
Changes since v7:
NB: No functional driver changes in this set. Full set released for
consistency, completeness and ease of use.
1) Updates to device tree bindings .yaml following comments from Rob Herring.
Adds #size-cells and #address-cells to the properties, constrained as
required. Validated using dt_binding_check.
2) Minor typo fixes to cti documentation file.
Changes since v6:
NB: No functional driver changes in this set. Full set released for
consistency, completeness and ease of use.
1) Updates to .yaml following comments from Maxime Ripard. Correct child node
descriptions, fix validation, and ensure reg entries required in child
nodes as per DeviceTree specification.
2) Update to Juno bindings to implement reg entry specification requirements.
Changes since v5:
1) Fixed up device tree .yaml file. Using extra compatible string for
v8 architecture CTI connections.
2) Ensure association code respects coresight mutex when setting cross
referenced pointers. Add in shutdown code.
3) Multiple minor code fixes & rationalisation.
Changes since v4:
Multiple changes following feedback from Mathieu, Leo and Suzuki.
1) Dropped RFC tag - wider distribution
2) CTI bindings definition now presented as a .yaml file - tested with
'dt-doc-validate' from devicetree.org/dt-schema project and in kernel
build tree with 'make dtbs_check' per kernel docs.
3) Sysfs links to other CoreSight devices moved out of this set into
a following set that deals with all CoreSight devices & sysfs links.
4) Documentation in .rst format and new directory following patchset in [1].
Extended example provided in docs.
5) Rationalised devicetree of_ specifics to use generic fwnode functions
where possible to enable easier addition of ACPI support later.
6) Other minor changes as requested in feedback from last patchset.
Changes since v3:
1) After discussion on CS mailing list, each CTI connection has a triggers<N>
sysfs directory with name and trigger signals listed for the connection.
2) Initial code for creating sysfs links between CoreSight components is
introduced and implementation for CTI provided. This allows exploration
of the CoreSight topology within the sysfs infrastructure. Patches for
links between other CoreSight components to follow.
3) Power management - CPU hotplug and idle omitted from this set as ongoing
developments may define required direction. Additional patch set to follow.
4) Multiple fixes applied as requested by reviewers, esp. Mathieu, Suzuki
and Leo.
Changes since v2:
Updates to allow for new features on coresight/next and feedback from
Mathieu and Leo.
1) Rebase and restructuring to apply on top of ACPI support patch set,
currently on coresight/next. of_coresight_cti has been renamed to
coresight-cti-platform and device tree bindings added to this but accessed
in a platform agnostic manner using fwnode for later ACPI support
to be added.
2) Split the sysfs patch into a series of functional patches.
3) Revised the refcount and enabling support.
4) Adopted the generic naming protocol - CTIs are either cti_cpuN or
cti_sysM
5) Various minor presentation / checkpatch issues highlighted in feedback.
6) Revised CPU hotplug to cover missing cases needed by ETM.
Changes since v1:
1) Significant restructuring of the source code. Adds cti-sysfs file and
cti device tree file. Patches add per feature rather than per source
file.
2) CPU type power event handling for hotplug moved to CoreSight core,
with generic registration interface provided for all CPU bound CS devices
to use.
3) CTI signal interconnection details in sysfs are now generated dynamically
from the connection lists in the driver. This fixes an issue with multi-line
sysfs output in the previous version.
4) Full device tree bindings for DB410 and Juno provided (to the extent
that CTI information is available).
5) The AMBA driver updates for UCI IDs are now upstream, so they are no longer
included in this set
Mike Leach (15):
coresight: cti: Initial CoreSight CTI Driver
coresight: cti: Add sysfs coresight mgmt reg access
coresight: cti: Add sysfs access to program function regs
coresight: cti: Add sysfs trigger / channel programming API
dt-bindings: arm: Adds CoreSight CTI hardware definitions
coresight: cti: Add device tree support for v8 arch CTI
coresight: cti: Add device tree support for custom CTI
coresight: cti: Enable CTI associated with devices
coresight: cti: Add connection information to sysfs
dt-bindings: qcom: Add CTI options for qcom msm8916
dt-bindings: arm: Juno platform - add CTI entries to device tree
dt-bindings: hisilicon: Add CTI bindings for hi-6220
docs: coresight: Update documentation for CoreSight to cover CTI
docs: sysfs: coresight: Add sysfs ABI documentation for CTI
Update MAINTAINERS to add reviewer for CoreSight
.../testing/sysfs-bus-coresight-devices-cti | 221 ++++
.../bindings/arm/coresight-cti.yaml | 336 +++++
.../devicetree/bindings/arm/coresight.txt | 7 +
.../trace/coresight/coresight-ect.rst | 211 +++
Documentation/trace/coresight/coresight.rst | 13 +
MAINTAINERS | 3 +
arch/arm64/boot/dts/arm/juno-base.dtsi | 162 ++-
arch/arm64/boot/dts/arm/juno-cs-r1r2.dtsi | 37 +-
arch/arm64/boot/dts/arm/juno-r1.dts | 25 +
arch/arm64/boot/dts/arm/juno-r2.dts | 25 +
arch/arm64/boot/dts/arm/juno.dts | 25 +
.../boot/dts/hisilicon/hi6220-coresight.dtsi | 130 +-
arch/arm64/boot/dts/qcom/msm8916.dtsi | 85 +-
drivers/hwtracing/coresight/Kconfig | 21 +
drivers/hwtracing/coresight/Makefile | 3 +
.../coresight/coresight-cti-platform.c | 485 +++++++
.../hwtracing/coresight/coresight-cti-sysfs.c | 1175 +++++++++++++++++
drivers/hwtracing/coresight/coresight-cti.c | 745 +++++++++++
drivers/hwtracing/coresight/coresight-cti.h | 235 ++++
.../hwtracing/coresight/coresight-platform.c | 20 +
drivers/hwtracing/coresight/coresight-priv.h | 15 +
drivers/hwtracing/coresight/coresight.c | 86 +-
include/dt-bindings/arm/coresight-cti-dt.h | 37 +
include/linux/coresight.h | 27 +
24 files changed, 4098 insertions(+), 31 deletions(-)
create mode 100644 Documentation/ABI/testing/sysfs-bus-coresight-devices-cti
create mode 100644 Documentation/devicetree/bindings/arm/coresight-cti.yaml
create mode 100644 Documentation/trace/coresight/coresight-ect.rst
create mode 100644 drivers/hwtracing/coresight/coresight-cti-platform.c
create mode 100644 drivers/hwtracing/coresight/coresight-cti-sysfs.c
create mode 100644 drivers/hwtracing/coresight/coresight-cti.c
create mode 100644 drivers/hwtracing/coresight/coresight-cti.h
create mode 100644 include/dt-bindings/arm/coresight-cti-dt.h
--
2.17.1
The connections between CoreSight sources, links and sinks are not obvious
without documentation or access to the device tree / ACPI definitions for
the platform.
This patchset provides sysfs links to enable the user to follow the trace
path from source to sink.
Components in the trace path are updated to have a connections sysfs
group, which collates all the links for that component.
The CTI components, which sit aside from the main trace path, also
have an added connections directory showing connections to other
CoreSight devices.
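As an illustration (the device and link names here are hypothetical and
depend on the platform), the trace path could then be explored with
something like:
$ ls -l /sys/bus/coresight/devices/etm0/connections/
which would list symlinks to the neighbouring components in the path (e.g. a
funnel, and the CTI associated with the source).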
This patchset applies on top of the recent CTI v9 patchset [1].
This is an adaptation of an original patchset [2] from Suzuki, reusing 2 patches
unchanged, with updates to the 3rd to adapt it to the new common code for trace
path and CTI component links and to add a default connections group.
Tested on Juno r1, DB410c; kernel 5.6-rc1
[1] http://lists.infradead.org/pipermail/linux-arm-kernel/2020-February/709923.…
[2] https://lists.linaro.org/pipermail/coresight/2019-May/002803.html
Changes since v3:
1) Rebased onto 5.6-rc1 kernel with CTI set[1].
Changes since v2:
1) Fixed issues with signature ordering noted by Suzuki.
2) Alterations to the main CTI set[1] to overcome an issue noted by Mathieu.
Changes since v1:
1) Code from the original v4 CTI set moved here so that all connections-related
code is in this set.
2) Connections directory mandatory for all CoreSight components and
generated as part of the registration process.
Mike Leach (3):
coresight: Add generic sysfs link creation functions.
coresight: cti: Add in sysfs links to other coresight devices.
coresight: docs: Add information about the topology representations.
Suzuki K Poulose (3):
coresight: Pass coresight_device for coresight_release_platform_data
coresight: add return value for fixup connections
coresight: Expose device connections via sysfs
.../trace/coresight/coresight-ect.rst | 5 +-
Documentation/trace/coresight/coresight.rst | 85 ++++++++
drivers/hwtracing/coresight/Makefile | 3 +-
drivers/hwtracing/coresight/coresight-cti.c | 41 +++-
.../hwtracing/coresight/coresight-platform.c | 2 +-
drivers/hwtracing/coresight/coresight-priv.h | 12 +-
drivers/hwtracing/coresight/coresight-sysfs.c | 204 ++++++++++++++++++
drivers/hwtracing/coresight/coresight.c | 75 ++++---
include/linux/coresight.h | 22 ++
9 files changed, 420 insertions(+), 29 deletions(-)
create mode 100644 drivers/hwtracing/coresight/coresight-sysfs.c
--
2.17.1
CTIs are defined in the device tree and associated with other CoreSight
devices. The core CoreSight code has been modified to enable the registration
of the CTI devices on the same bus as the other CoreSight components,
but as these are not actually trace generation / capture devices, they
are not part of the CoreSight path when generating trace.
However, the definition of the standard CoreSight device has been extended
to include a reference to an associated CTI device, and the enable / disable
trace path operations will auto enable/disable any associated CTI devices at
the same time.
Programming is at present via sysfs - a full API is provided to utilise the
hardware capabilities. As CTI devices are unprogrammed by default, the auto
enable described above will have no effect until explicit programming takes
place.
A set of device tree bindings specific to the CTI topology has been defined.
The driver accesses these in a platform agnostic manner, so ACPI bindings
can be added later, once they have been agreed and defined for the CTI device.
Documentation has been updated to describe the CTI hardware, its use and
programming in sysfs, and the new dts bindings required.
Tested on DB410 board and Juno board, against the Linux 5.6-rc1, 5.5 trees.
Changes since v8:
1) Use devm_ allocation in cti_match_fixup_csdev() to match other allocations.
2) Minor comment update per request.
Changes since v7:
NB: No functional driver changes in this set. Full set released for
consistency, completeness and ease of use.
1) Updates to device tree bindings .yaml following comments from Rob Herring.
Adds #size-cells and #address-cells to the properties, constrained as
required. Validated using dt_binding_check.
2) Minor typo fixes to cti documentation file.
Changes since v6:
NB: No functional driver changes in this set. Full set released for
consistency, completeness and ease of use.
1) Updates to .yaml following comments from Maxime Ripard. Correct child node
descriptions, fix validation, and ensure reg entries required in child
nodes as per DeviceTree specification.
2) Update to Juno bindings to implement reg entry specification requirements.
Changes since v5:
1) Fixed up device tree .yaml file. Using extra compatible string for
v8 architecture CTI connections.
2) Ensure association code respects coresight mutex when setting cross
referenced pointers. Add in shutdown code.
3) Multiple minor code fixes & rationalisation.
Changes since v4:
Multiple changes following feedback from Mathieu, Leo and Suzuki.
1) Dropped RFC tag - wider distribution
2) CTI bindings definition now presented as a .yaml file - tested with
'dt-doc-validate' from devicetree.org/dt-schema project and in kernel
build tree with 'make dtbs_check' per kernel docs.
3) Sysfs links to other CoreSight devices moved out of this set into
a following set that deals with all CoreSight devices & sysfs links.
4) Documentation in .rst format and new directory following patchset in [1].
Extended example provided in docs.
5) Rationalised devicetree of_ specifics to use generic fwnode functions
where possible to enable easier addition of ACPI support later.
6) Other minor changes as requested in feedback from last patchset.
Changes since v3:
1) After discussion on CS mailing list, each CTI connection has a triggers<N>
sysfs directory with name and trigger signals listed for the connection.
2) Initial code for creating sysfs links between CoreSight components is
introduced and implementation for CTI provided. This allows exploration
of the CoreSight topology within the sysfs infrastructure. Patches for
links between other CoreSight components to follow.
3) Power management - CPU hotplug and idle omitted from this set as ongoing
developments may define required direction. Additional patch set to follow.
4) Multiple fixes applied as requested by reviewers, esp. Mathieu, Suzuki
and Leo.
Changes since v2:
Updates to allow for new features on coresight/next and feedback from
Mathieu and Leo.
1) Rebase and restructuring to apply on top of ACPI support patch set,
currently on coresight/next. of_coresight_cti has been renamed to
coresight-cti-platform and device tree bindings added to this but accessed
in a platform agnostic manner using fwnode for later ACPI support
to be added.
2) Split the sysfs patch into a series of functional patches.
3) Revised the refcount and enabling support.
4) Adopted the generic naming protocol - CTIs are either cti_cpuN or
cti_sysM
5) Various minor presentation / checkpatch issues highlighted in feedback.
6) Revised CPU hotplug to cover missing cases needed by ETM.
Changes since v1:
1) Significant restructuring of the source code. Adds cti-sysfs file and
cti device tree file. Patches add per feature rather than per source
file.
2) CPU type power event handling for hotplug moved to CoreSight core,
with generic registration interface provided for all CPU bound CS devices
to use.
3) CTI signal interconnection details in sysfs are now generated dynamically
from the connection lists in the driver. This fixes an issue with multi-line
sysfs output in the previous version.
4) Full device tree bindings for DB410 and Juno provided (to the extent
that CTI information is available).
5) The AMBA driver updates for UCI IDs are now upstream, so they are no longer
included in this set
Mike Leach (15):
coresight: cti: Initial CoreSight CTI Driver
coresight: cti: Add sysfs coresight mgmt reg access.
coresight: cti: Add sysfs access to program function regs
coresight: cti: Add sysfs trigger / channel programming API
dt-bindings: arm: Adds CoreSight CTI hardware definitions.
coresight: cti: Add device tree support for v8 arch CTI
coresight: cti: Add device tree support for custom CTI.
coresight: cti: Enable CTI associated with devices.
coresight: cti: Add connection information to sysfs
dt-bindings: qcom: Add CTI options for qcom msm8916
dt-bindings: arm: Juno platform - add CTI entries to device tree.
dt-bindings: hisilicon: Add CTI bindings for hi-6220
docs: coresight: Update documentation for CoreSight to cover CTI.
docs: sysfs: coresight: Add sysfs ABI documentation for CTI
Update MAINTAINERS to add reviewer for CoreSight.
.../testing/sysfs-bus-coresight-devices-cti | 221 ++++
.../bindings/arm/coresight-cti.yaml | 336 +++++
.../devicetree/bindings/arm/coresight.txt | 7 +
.../trace/coresight/coresight-ect.rst | 211 +++
Documentation/trace/coresight/coresight.rst | 13 +
MAINTAINERS | 3 +
arch/arm64/boot/dts/arm/juno-base.dtsi | 162 ++-
arch/arm64/boot/dts/arm/juno-cs-r1r2.dtsi | 37 +-
arch/arm64/boot/dts/arm/juno-r1.dts | 25 +
arch/arm64/boot/dts/arm/juno-r2.dts | 25 +
arch/arm64/boot/dts/arm/juno.dts | 25 +
.../boot/dts/hisilicon/hi6220-coresight.dtsi | 130 +-
arch/arm64/boot/dts/qcom/msm8916.dtsi | 85 +-
drivers/hwtracing/coresight/Kconfig | 21 +
drivers/hwtracing/coresight/Makefile | 3 +
.../coresight/coresight-cti-platform.c | 485 +++++++
.../hwtracing/coresight/coresight-cti-sysfs.c | 1175 +++++++++++++++++
drivers/hwtracing/coresight/coresight-cti.c | 748 +++++++++++
drivers/hwtracing/coresight/coresight-cti.h | 235 ++++
.../hwtracing/coresight/coresight-platform.c | 20 +
drivers/hwtracing/coresight/coresight-priv.h | 15 +
drivers/hwtracing/coresight/coresight.c | 86 +-
include/dt-bindings/arm/coresight-cti-dt.h | 37 +
include/linux/coresight.h | 27 +
24 files changed, 4101 insertions(+), 31 deletions(-)
create mode 100644 Documentation/ABI/testing/sysfs-bus-coresight-devices-cti
create mode 100644 Documentation/devicetree/bindings/arm/coresight-cti.yaml
create mode 100644 Documentation/trace/coresight/coresight-ect.rst
create mode 100644 drivers/hwtracing/coresight/coresight-cti-platform.c
create mode 100644 drivers/hwtracing/coresight/coresight-cti-sysfs.c
create mode 100644 drivers/hwtracing/coresight/coresight-cti.c
create mode 100644 drivers/hwtracing/coresight/coresight-cti.h
create mode 100644 include/dt-bindings/arm/coresight-cti-dt.h
--
2.17.1
The same comment applies with regard to mailing lists.
On Mon, 17 Feb 2020 at 10:16, zied guermazi <guermazi_zied(a)yahoo.com> wrote:
>
> Hi Mike, hi Matthieu,
>
> I was too quick to press the send button, sorry for the inconvenience.
> And yes, it works. I published the source code on github (https://github.com/gzied/binutils-gdb/tree/gdb_arm_coresight), as well as a mail on the linaro mailing list (https://lists.linaro.org/pipermail/coresight/2020-February/003676.html).
Unfortunately you didn't publish your work in "PATCH" format and as
such it can't be reviewed.
> Even though there is still an effort needed to stabilize it for ARMv7, and test it for ARMv8, I consider this a breakthrough that we can build on top of.
> I will be visiting the Embedded World exhibition between the 25th and 27th of February in Nuremberg, Germany. Are you planning to be there too? I am looking forward to meeting you there if possible to discuss possibilities of further development and integration into the mainstream.
I won't be in Nuremberg next week and I'm pretty sure Mike won't be
there either. Other people on the lists might be around and
interested in meeting with you. Mike and I will be attending Linaro
Connect in Budapest between the 23rd and 27th of March. The Linaro
toolchain group will also be there so it might be the perfect
opportunity to sit down for an hour or so and have a chat.
A good day to you,
Mathieu
>
> Kind Regards
> Zied Guermazi
>
>
>
>
>
> On Monday, February 17, 2020, 06:08:26 PM GMT+1, zied guermazi <guermazi_zied(a)yahoo.com> wrote:
>
>
> Hi Mike, hi Matthieu
> it works!
>
>
>
> On Monday, December 16, 2019, 11:20:20 PM GMT+1, zied guermazi <guermazi_zied(a)yahoo.com> wrote:
>
>
> Hi Mike, hi Mathieu,
> Over the last few weeks I was able to spend some time on implementing this feature, and I want to share the status with you and get your recommendations on the organizational and technical level.
> So far I was able to use perf events to configure the drivers for ETM and collect the traces. The code was tested successfully on an STM32MP157A discovery kit (ARMv7).
> I would like to push this to git, first for review and second for integration and creating traction. Do you think it is ok to push it in this status (feature partially achieved)? Is the linaro gdb git the correct one? Who is the maintainer of this part of gdb? I do not have an ARMv8 platform to test on. Who can support here?
>
> During the implementation a few technical questions arose:
> - ETM trace collection requires selecting a trace sink, and I have two alternatives: either extending the "record btrace etm" command (used to start tracing) with a sink as an argument, or extending the command "set record btrace" (general configuration of tracing) with an etm sink sub-command? (I have hard-coded it currently to be able to progress.)
> - Currently I am hardcoding the path "/sys/bus/event_source/devices/cs_etm/" as the default ETM source; is this guaranteed to be unique? Can we have a system with many ETM sources enumerated in sysfs?
>
> For parsing the traces I will need some configuration parameters like the content of the registers ETMCR, ETMIDR, ETMCCER, ETMTRACEIDR. If my understanding of the implementation of perf is correct, it is collecting them from the sysfs files located in /sys/bus/event_source/devices/cs_etm/cpu0/mgmt/, which are global for the system. My question is: what will happen if two processes are requesting tracing at the same time? How can we differentiate traces going to one session from the second one? Is it possible to get the parameters of a given session by some kind of ioctl request on the file descriptor we get out of the sys_perf_open call?
>
> I will publish those queries on the linaro coresight and gdb forums; I wanted first to align with you, especially on the organizational issues, before going to a bigger round.
>
> Thanks for your support
> Zied Guermazi
>
>
Hi all,
This is regarding the plan to build the CoreSight drivers as kernel modules.
Currently that is not supported. I noticed there were some patches adding
the ability to build them as modules, but those patches never ended up being merged.
I'd like to know whether anyone is working on this or has plans for it.
I'm interested in this since there is a trend on mobile devices,
especially Android devices, that all drivers should be built as kernel
modules. CoreSight drivers are important for mobile devices to debug
SW/HW issues.
Thanks,
Tingwei
This patch series addresses issues with synthesizing instruction
samples; in particular, when the instruction sample period is small enough,
the current logic cannot synthesize multiple instruction samples within
one instruction range packet.
Patch 0001 swaps packets for instruction samples, so that the
option '--itrace=iNNN' can work well.
Patch 0002 avoids resetting the last branches for every instruction
sample; if the last branches are reset every time a sample is generated, the
later samples in the same range packet cannot use the last branches
anymore.
Patch 0003 fixes the handling of different instruction periods,
especially for small sample periods.
Patch 0004 is an optimization for copying last branches; it only copies
last branches once if the instruction samples share the same last
branches.
Patch 0005 is a minor fix for unsigned variable comparison to zero.
This patch set has been rebased on the latest perf/core branch and
verified on a Juno board with the commands below:
# perf script --itrace=i2
# perf script --itrace=i2il16
# perf inject --itrace=i2il16 -i perf.data -o perf.data.new
# perf inject --itrace=i100il16 -i perf.data -o perf.data.new
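For reference, the perf.data file consumed by the commands above can be
captured along these lines (the sink name tmc_etf0 is only an example and is
platform specific):
# perf record -e cs_etm/@tmc_etf0/u --per-thread -- ls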
Changes from v3:
* Refactored patch 0001 with new function cs_etm__packet_swap() (Mike);
* Refined instruction sample generation flow with single while loop,
which completely uses Mike's suggestions (Mike);
* Added Mike's review tags for patch 01/02/04/05.
Changes from v2:
* Added patch 0001 which is to fix swapping packets for instruction
samples;
* Refined minor commit logs and comments;
* Rebased on the latest perf/core branch.
Changes from v1:
* Rebased patch set on perf/core branch with latest commit 9fec3cd5fa4a
("perf map: Check if the map still has some refcounts on exit").
Leo Yan (5):
perf cs-etm: Swap packets for instruction samples
perf cs-etm: Continuously record last branch
perf cs-etm: Correct synthesizing instruction samples
perf cs-etm: Optimize copying last branches
perf cs-etm: Fix unsigned variable comparison to zero
tools/perf/util/cs-etm.c | 157 +++++++++++++++++++++++++++------------
1 file changed, 111 insertions(+), 46 deletions(-)
--
2.17.1
Hi Zied,
As requested before, always add the CS mailing list when interacting
with us. There is a fair number of people on there who would surely
be interested in this work. I also CC'd the linaro-toolchain group
since some of the content below is related to what they do.
On Mon, 17 Feb 2020 at 10:08, zied guermazi <guermazi_zied(a)yahoo.com> wrote:
>
> Hi Mike, hi Matthieu
> it works!
>
>
>
> On Monday, December 16, 2019, 11:20:20 PM GMT+1, zied guermazi <guermazi_zied(a)yahoo.com> wrote:
>
>
> Hi Mike, hi Mathieu,
> Over the last few weeks I was able to spend some time on implementing this feature, and I want to share the status with you and get your recommendations on the organizational and technical level.
> So far I was able to use perf events to configure the drivers for ETM and collect the traces. The code was tested successfully on an STM32MP157A discovery kit (ARMv7).
> I would like to push this to git, first for review and second for integration and creating traction. Do you think it is ok to push it in this status (feature partially achieved)? Is the linaro gdb git the correct one? Who is the maintainer of this part of gdb? I do not have an ARMv8 platform to test on. Who can support here?
I will address your questions one by one:
Q: Do you think it is ok to push it in this status (feature partially
achieved)?
A: That depends on how advanced things are. If it is stable (i.e. it
does something) and can be used as a starting point for other people
to work on, then there is probably value in publishing your work.
Q: Is the linaro gdb git the correct one?
A: I see from your other email that you've already published your work.
Q: Who is the maintainer of this part of gdb?
A: I'm sure the GDB project has documentation and a well defined
process that would identify who you should submit your code to.
Q: I do not have an ARMv8 platform to test on. Who can support here?
A: No doubt you'd get a lot more interest if you were working on V8.
I suggest purchasing a dragonboard 410c - they are super cheap, well
supported and one of our CS reference platforms. I think we talked
about that before...
>
> during the implementation few technical question raised:
> - ETM trace collection requires selecting a trace sink, and I have two alternatives: either extending the "record btrace etm" command (used to start tracing) with a sink as an argument, or extending the command "set record btrace" (general configuration of tracing) with an etm sink sub-command? (I have hard-coded it currently to be able to progress.)
I would assume this "set record btrace" command has an effect on the
current session only. If so I would go for the latter option.
> - Currently I am hardcoding the path "/sys/bus/event_source/devices/cs_etm/" as the default ETM source; is this guaranteed to be unique? Can we have a system with many ETM sources enumerated in sysfs?
>
I am very confused by this question. Yes, the path
"/sys/bus/event_source/devices/cs_etm/" is guaranteed to be unique, but
it is by no means a "default source". It is simply a directory used
by the perf tools to have more details on the CS specifics for the
running platform. Nowadays it is fair to assume there is one ETM per
CPU, all enumerated under sysfs. Also keep in mind that processes are
being moved around by the scheduler and as such, a specific source
can't be hard coded.
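For illustration, on a typical platform that directory holds per-CPU
entries alongside the standard perf event descriptions (the output here is
abbreviated and purely illustrative):
$ ls /sys/bus/event_source/devices/cs_etm/
cpu0  cpu1  cpu2  cpu3  format  sinks  type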
> For parsing the traces I will need some configuration parameters like the content of the registers ETMCR, ETMIDR, ETMCCER, ETMTRACEIDR. If my understanding of the implementation of perf is correct, it is collecting them from the sysfs files located in /sys/bus/event_source/devices/cs_etm/cpu0/mgmt/, which are global for the system. My question is: what will happen if two processes are requesting tracing at the same time? How can we differentiate traces going to one session from the second one? Is it possible to get the parameters of a given session by some kind of ioctl request on the file descriptor we get out of the sys_perf_open call?
Configuration of the tracers does indeed need to be considered. At
this time we assume all the tracers in an implementation are similar,
hence using ".../cpu0/mgmt". The information gathered from there is
related to the static configuration of the tracers. Per session
dynamic configuration is collected from the perf tools command line
and communicated to the perf infrastructure for later interpretation
by the decoding code when instantiating a decoder from the openCSD
library. Since the current framework handles only N:1 source/sink
configuration, there can only be one trace session using a sink.
There is currently no way to extract the trace session configuration other
than via the user space perf tools mechanism.
>
> I will publish those queries on the linaro coresight and gdb forums; I wanted first to align with you, especially on the organizational issues, before going to a bigger round.
>
> Thanks for your support
> Zied Guermazi
>
>