On Mon, Nov 10, 2025 at 11:18:02AM +0000, James Clark wrote:
[...]
> > @@ -399,13 +399,50 @@ int coresight_enable_source(struct coresight_device *csdev, struct perf_event *event, enum cs_mode mode, struct coresight_path *path) {
> > -	return source_ops(csdev)->enable(csdev, event, mode, path);
> > +	int ret;
> > +
> > +	/*
> > +	 * Record the path in the source device.  The path pointer is first
> > +	 * assigned, followed by transitioning from DISABLED mode to an
> > +	 * enabled state on the target CPU.  Conversely, during the disable
> > +	 * flow, the device mode is set to DISABLED before the path pointer
> > +	 * is cleared.
> > +	 *
> > +	 * This ordering ensures the path pointer can be safely accessed
> > +	 * under the following race condition:
> > +	 *
> > +	 *   CPU(a)                          CPU(b)
> > +	 *
> > +	 *   coresight_enable_source()
> > +	 *     STORE source->path;
> > +	 *     smp_mb();
> > +	 *     source_ops(csdev)->enable();
> > +	 *       `-> etm4_enable_sysfs_smp_call()
> > +	 *                                   STORE source->mode;
> > +	 *
> > +	 * This sequence ensures that accessing the path pointer is safe
> > +	 * when the device is in enabled mode.
>
> Doesn't that only work if you meticulously use READ_ONCE() for accessing
> path on the read side?  Which doesn't look like it has been done.
As Suzuki suggested, I will move the path setting onto the target CPU (along with the mode setting). Since the coresight mode helpers already contain memory barriers, we should be able to guarantee the ordering between mode and path if everything runs on the target CPU, with no need to introduce new barriers.
> I'm not sure why path is special though; there are plenty of variables in
> csdev that are accessed while the device is active.  Shouldn't path be
> covered by the existing locks in the same way?  It would be much safer
> and easier to understand if it was.
Unlike other fields in csdev, the path can be retrieved from other CPUs, and we don't want to introduce a race between the low-level CPU idle flow and the high-level sysfs knobs, so we don't use a spinlock to protect it.
Instead, we rely on the state machine (the device's mode) to access the path pointer safely.
Thanks,
Leo