- Mar 26, 2019
-
-
On updates of task group (TG) clamp values, ensure that these new values are enforced on all RUNNABLE tasks of the task group, i.e. all RUNNABLE tasks are immediately boosted and/or clamped as requested. Do that by slightly refactoring uclamp_bucket_inc(): an additional cgroup_subsys_state (css) parameter is used to walk the list of tasks in the TG and update the RUNNABLE ones. Do that by taking the rq lock for each task, the same mechanism used for CPU affinity mask updates. Signed-off-by:
Patrick Bellasi <patrick.bellasi@arm.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Tejun Heo <tj@kernel.org>
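As a rough standalone C model of the update path described above (not the kernel code: the task structure, the helper name and the on_rq field are invented for illustration, and locking is ignored), the group walk only touches tasks that are currently RUNNABLE; sleeping tasks simply pick up the new value at their next enqueue:

    #include <stdbool.h>

    struct task {                       /* stand-in for task_struct */
        bool on_rq;                     /* RUNNABLE on some CPU */
        unsigned int util_min, util_max;
    };

    /* Walk the group's tasks (the css task list in the kernel) and refresh
     * only the RUNNABLE ones; in the kernel each update would be done under
     * task_rq_lock(), as for CPU affinity mask updates. */
    static void tg_update_runnable_tasks(struct task *tasks, int n,
                                         unsigned int new_min, unsigned int new_max)
    {
        for (int i = 0; i < n; i++) {
            if (!tasks[i].on_rq)
                continue;               /* picked up at next enqueue */
            tasks[i].util_min = new_min;
            tasks[i].util_max = new_max;
        }
    }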
-
When a task-specific clamp value is configured via sched_setattr(2), this value is accounted in the corresponding clamp bucket every time the task is {en,de}queued. However, when cgroups are also in use, the task-specific clamp values could be restricted by the task group (TG) clamp values. Update uclamp_cpu_inc() to aggregate task and TG clamp values. Every time a task is enqueued, it's accounted in the clamp bucket corresponding to the smaller clamp between the task-specific value and its TG effective value. This allows us to:
1. ensure cgroup clamps are always used to restrict task-specific requests, i.e. boosted only up to the effective granted value or clamped at least to a certain value
2. implement a "nice-like" policy, where tasks are still allowed to request less than what is enforced by their current TG
This mimics what already happens for a task's CPU affinity mask when the task is also in a cpuset, i.e. cgroup attributes are always used to restrict per-task attributes. Do this by exploiting the concept of "effective" clamp, which is already used by a TG to track parent-enforced restrictions. Apply task group clamp restrictions only to tasks belonging to a child group; for tasks in the root group or in an autogroup, only system defaults are enforced. Signed-off-by:
Patrick Bellasi <patrick.bellasi@arm.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Tejun Heo <tj@kernel.org>
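A minimal sketch of the aggregation rule described above (helper name invented; SCHED_CAPACITY_SCALE shown only for context): the bucket a task is accounted in is defined by the smaller of the task-specific request and the TG effective value:

    #define SCHED_CAPACITY_SCALE 1024u

    /* Task requests are always restricted by the group's effective clamp. */
    static unsigned int uclamp_tg_restrict(unsigned int task_clamp,
                                           unsigned int tg_effective)
    {
        return task_clamp < tg_effective ? task_clamp : tg_effective;
    }

For example, a task requesting util_min=800 in a group whose effective util_min is 512 is boosted only up to 512, while a task requesting 200 keeps 200 (the "nice-like" behavior).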
-
The clamp values are not tunable at the level of the root task group. That's for two main reasons:
- the root group represents "system resources", which are always entirely available from the cgroup standpoint
- when tuning/restricting "system resources" makes sense, tuning must be done using a system-wide API which should also be available when control groups are not
When a system-wide restriction is available, cgroups should be aware of its value in order to know exactly how much "system resource" is available for the subgroups. Utilization clamping already supports the concepts of:
- system defaults: which define the maximum possible clamp values usable by tasks
- effective clamps: which allow a parent cgroup to constrain (perhaps temporarily) its descendants without losing the information related to the values "requested" by them
Exploit these two concepts and bind them together in such a way that, whenever system defaults are tuned, the new values are propagated to (possibly) restrict or relax the "effective" value of nested cgroups. Signed-off-by:
Patrick Bellasi <patrick.bellasi@arm.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Tejun Heo <tj@kernel.org>
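A hedged standalone C sketch of the propagation described above (the tg structure and function names are invented; the real patch works on the kernel's task_group tree): a change of the system default re-propagates top-down, each group getting the minimum of its own request and what its parent effectively grants:

    struct tg {                            /* illustrative task-group node */
        unsigned int requested;            /* value written for this group */
        unsigned int effective;            /* value actually enforced */
        struct tg *children[8];
        int nr_children;
    };

    /* Recompute effective values top-down; stop early where nothing changes. */
    static void tg_propagate(struct tg *tg, unsigned int parent_effective)
    {
        unsigned int eff = tg->requested < parent_effective ?
                           tg->requested : parent_effective;

        if (eff == tg->effective)
            return;                        /* subtree already up to date */
        tg->effective = eff;
        for (int i = 0; i < tg->nr_children; i++)
            tg_propagate(tg->children[i], eff);
    }

    /* A write to the system-wide default acts as the constraint at the top. */
    static void system_default_changed(struct tg *root, unsigned int sys_default)
    {
        tg_propagate(root, sys_default);
    }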
-
In order to properly support hierarchical resource control, the cgroup delegation model requires that attribute writes from a child group never fail but are still (potentially) constrained based on the parent's assigned resources. This requires properly propagating and aggregating parent attributes down to their descendants. Let's implement this mechanism by adding a new "effective" clamp value for each task group. The effective clamp value is defined as the smaller value between the clamp value of a group and the effective clamp value of its parent. This is the actual clamp value enforced on tasks in a task group. Since it can be interesting for userspace, e.g. system management software, to know exactly what the currently propagated/enforced configuration is, the effective clamp values are exposed to user-space by means of a new pair of read-only attributes: cpu.util.{min,max}.effective. Signed-off-by:
Patrick Bellasi <patrick.bellasi@arm.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Tejun Heo <tj@kernel.org> --- Changes in v7: Others: - ensure clamp values are not tunable at root cgroup level
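In its simplest form, the per-group computation described above reduces to a min() against the parent's effective value; a hedged sketch (names invented):

    /* Effective clamp seen by tasks in a group: the group's own request,
     * restricted by whatever its parent is effectively granted. */
    static unsigned int tg_effective_clamp(unsigned int requested,
                                           unsigned int parent_effective)
    {
        return requested < parent_effective ? requested : parent_effective;
    }

So a group requesting util.min=800 under a parent effectively granted 512 would report 512 in its read-only cpu.util.min.effective attribute.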
-
The cgroup CPU bandwidth controller allows assigning a specified (maximum) bandwidth to the tasks of a group. However, this bandwidth is defined and enforced only on a temporal basis, without considering the actual frequency a CPU is running at. Thus, the amount of computation completed by a task within an allocated bandwidth can vary a lot depending on the actual frequency the CPU is running at while executing that task. The amount of computation can also be affected by the specific CPU the task is running on, especially on asymmetric-capacity systems like Arm's big.LITTLE.
With the availability of schedutil, the scheduler is now able to drive frequency selection based on actual task utilization. Moreover, the utilization clamping support provides a mechanism to bias the frequency selection operated by schedutil depending on constraints assigned to the tasks currently RUNNABLE on a CPU. Given the mechanisms described above, it is now possible to extend the cpu controller to specify the minimum (or maximum) utilization which should be considered for tasks RUNNABLE on a CPU. This makes it possible to better define the actual computational power assigned to task groups, thus improving the cgroup CPU bandwidth controller, which is currently based just on time constraints.
Extend the CPU controller with a couple of new attributes, util.{min,max}, which allow enforcing utilization boosting and capping for all the tasks in a group. Specifically:
- util.min: defines the minimum utilization which should be considered, i.e. the RUNNABLE tasks of this group will run at least at a minimum frequency which corresponds to the min_util utilization
- util.max: defines the maximum utilization which should be considered, i.e. the RUNNABLE tasks of this group will run up to a maximum frequency which corresponds to the max_util utilization
These attributes:
a) are available only for non-root nodes, both on default and legacy hierarchies, while system-wide clamps are defined by a generic interface which does not depend on cgroups
b) do not enforce any constraints and/or dependencies between the parent and its child nodes, thus relying:
   - on permission settings defined by the system management software, to define whether subgroups can configure their clamp values
   - on the delegation model, to ensure that effective clamps are updated to consider both subgroup requests and parent group constraints
c) have higher priority than task-specific clamps, defined via sched_setattr(), thus allowing control and restriction of task requests
This patch provides the basic support to expose the two new attributes and to validate their run-time updates, while not (yet) actually allocating clamp buckets. Signed-off-by:
Patrick Bellasi <patrick.bellasi@arm.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Tejun Heo <tj@kernel.org>
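A hedged sketch of the kind of run-time validation this patch introduces (the exact checks in the patch may differ; struct and function names here are invented): values must stay within SCHED_CAPACITY_SCALE and util.min must never cross above util.max:

    #include <errno.h>

    #define SCHED_CAPACITY_SCALE 1024u

    struct tg_clamps { unsigned int util_min, util_max; };

    /* Validate a write to one of the proposed cpu.util.{min,max} attributes. */
    static int tg_validate_clamp_write(const struct tg_clamps *cur,
                                       int is_min, unsigned int value)
    {
        if (value > SCHED_CAPACITY_SCALE)
            return -ERANGE;
        if (is_min && value > cur->util_max)
            return -EINVAL;
        if (!is_min && value < cur->util_min)
            return -EINVAL;
        return 0;
    }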
-
The Energy Aware Scheduler (EAS) estimates the energy impact of waking up a task on a given CPU. This estimation is based on:
a) the (active) power consumption defined for each CPU frequency
b) an estimate of which frequency will be used on each CPU
c) an estimate of the busy time (utilization) of each CPU
Utilization clamping can affect both the b) and c) estimates. A CPU is expected to run:
- at a higher-than-required frequency, but for a shorter time, if its estimated utilization is smaller than the minimum utilization enforced by uclamp
- at a lower-than-required frequency, but for a longer time, if its estimated utilization is bigger than the maximum utilization enforced by uclamp
While the effects on busy time for both boosted/capped tasks are already considered by compute_energy(), clamping effects on frequency selection are currently ignored by that function. Fix it by considering how CPU clamp values will be affected by a task waking up and being RUNNABLE on that CPU. Do that by refactoring schedutil_freq_util() to take an additional task_struct*, which allows EAS to evaluate the impact on clamp values of a task being eventually queued on a CPU. Clamp values are applied to the RT+CFS utilization only when FREQUENCY_UTIL is required by compute_energy(). Do note that switching from ENERGY_UTIL to FREQUENCY_UTIL in the computation of the cpu_util signal implies that we are more likely to estimate the highest OPP when an RT task is running on another CPU of the same performance domain. This can have an impact on energy estimation but:
- it's not easy to say which approach is better, since it quite likely depends on the use case
- the original approach could still be obtained by setting a smaller task-specific util_min whenever required
While at it:
- rename schedutil_freq_util() to schedutil_cpu_util(), since it's not only used for frequency selection
- use "unsigned int" instead of "unsigned long" whenever the tracked utilization value is not expected to overflow 32 bits
Signed-off-by:
Patrick Bellasi <patrick.bellasi@arm.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com> --- Changes in v7: Message-ID: <20190122151404.5rtosic6puixado3@queper01-lin> - add a note on side-effects due to the usage of FREQUENCY_UTIL for performance domain frequency estimation - add a similar note to this changelog
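A simplified standalone model of the behavior described above (not the schedutil_cpu_util() signature; names and bounds are illustrative): the RT+CFS utilization is clamped only when the caller asks for a frequency estimate, while the energy estimate keeps the raw utilization:

    enum util_type { FREQUENCY_UTIL, ENERGY_UTIL };

    static unsigned long cpu_util_model(unsigned long util_cfs, unsigned long util_rt,
                                        enum util_type type,
                                        unsigned long clamp_min, unsigned long clamp_max)
    {
        unsigned long util = util_cfs + util_rt;

        if (type == FREQUENCY_UTIL) {       /* bias the frequency estimate only */
            if (util < clamp_min)
                util = clamp_min;
            if (util > clamp_max)
                util = clamp_max;
        }
        return util;
    }

Here clamp_min/clamp_max stand for the rq's aggregated clamps, possibly raised by the candidate task passed in by EAS.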
-
Currently, uclamp_util() allows clamping a specified utilization considering the clamp values requested by RUNNABLE tasks on a CPU. Sometimes, however, it can be useful to verify how clamp values would change if a task were to run on a given CPU. For example, the Energy Aware Scheduler (EAS) is interested in evaluating and comparing the energy impact of different scheduling decisions. Add uclamp_util_with(), which clamps a given utilization while also considering the possible impact on CPU clamp values of a specified task. Signed-off-by:
Patrick Bellasi <patrick.bellasi@arm.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org>
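A hedged sketch of the idea (names invented, not the uclamp_util_with() implementation): the clamp bounds a CPU would see with the candidate task enqueued are the MAX-aggregation of the rq's current bounds and the task's own:

    struct clamp_bounds { unsigned long min, max; };

    static struct clamp_bounds clamp_with_task(struct clamp_bounds rq,
                                               struct clamp_bounds p)
    {
        struct clamp_bounds eff = {
            .min = rq.min > p.min ? rq.min : p.min,
            .max = rq.max > p.max ? rq.max : p.max,
        };
        return eff;
    }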
-
Each time a frequency update is required via schedutil, a frequency is selected to (possibly) satisfy the utilization reported by each scheduling class. However, when utilization clamping is in use, the frequency selection should consider userspace utilization clamping hints. This will allow, for example:
- boosting tasks which are directly affecting the user experience, by running them at least at a minimum "requested" frequency
- capping low-priority tasks not directly affecting the user experience, by running them only up to a maximum "allowed" frequency
These constraints are meant to support per-task tuning of the frequency selection, thus supporting a fine-grained definition of performance boosting vs. energy saving strategies in kernel space. Add support to clamp the utilization of RUNNABLE FAIR and RT tasks within the boundaries defined by their aggregated utilization clamp constraints. Do that by considering the max(min_util, max_util), to give boosted tasks the performance they need even when they happen to be co-scheduled with other capped tasks. Signed-off-by:
Patrick Bellasi <patrick.bellasi@arm.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com> --- Changes in v7: Message-ID: <CAJZ5v0j2NQY_gKJOAy=rP5_1Dk9TODKNhW0vuvsynTN3BUmYaQ@mail.gmail.com> - merged FAIR and RT integration patches in this one Message-ID: <20190123142455.454u4w253xaxzar3@e110439-lin> - dropped clamping for IOWait boost Message-ID: <20190122123704.6rb3xemvxbp5yfjq@e110439-lin> - fixed go to max for RT tasks on !CONFIG_UCLAMP_TASK
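A minimal sketch of the clamping rule described above (illustrative only): the lower bound is min_util and the upper bound is max(min_util, max_util), so a boosted task still gets its boost even when co-scheduled with capped tasks:

    static unsigned long clamp_freq_util(unsigned long util,
                                         unsigned long min_util, unsigned long max_util)
    {
        unsigned long upper = max_util > min_util ? max_util : min_util;

        if (util < min_util)
            return min_util;
        if (util > upper)
            return upper;
        return util;
    }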
-
By default, FAIR tasks start without clamps, i.e. neither boosted nor capped, and they run at the best frequency matching their utilization demand. This default behavior does not fit RT tasks, which instead are expected to run at the maximum available frequency, unless explicitly capped. Enforce the correct behavior for RT tasks by setting util_min to max whenever:
1. a task is switched to the RT class and it does not already have a user-defined clamp value assigned
2. a task is forked from a parent with RESET_ON_FORK set
NOTE: utilization clamp values are cross-scheduling-class attributes and thus they are never changed/reset once a value has been explicitly defined from user-space. Signed-off-by:
Patrick Bellasi <patrick.bellasi@arm.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org>
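A hedged sketch of the default described above (field and function names invented): the boost is applied only when user-space has not explicitly configured a clamp:

    #include <stdbool.h>

    #define SCHED_CAPACITY_SCALE 1024u

    struct task_uclamp {
        unsigned int util_min;
        bool user_defined;          /* set once user-space configured a value */
    };

    /* Called when a task switches to the RT class (or is forked with
     * RESET_ON_FORK): default to maximum boost unless the user asked otherwise. */
    static void rt_default_boost(struct task_uclamp *uc)
    {
        if (uc->user_defined)
            return;
        uc->util_min = SCHED_CAPACITY_SCALE;
    }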
-
A forked task gets the same clamp values as its parent. However, when the RESET_ON_FORK flag is set on the parent, e.g. via:
  sys_sched_setattr()
    sched_setattr()
      __sched_setscheduler(attr::SCHED_FLAG_RESET_ON_FORK)
the newly forked task is expected to start with all attributes reset to default values. Do that for utilization clamp values too, by caching the reset request and propagating it into the existing uclamp_fork() call, which already provides the required initialization for other uclamp-related bits. Signed-off-by:
Patrick Bellasi <patrick.bellasi@arm.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org>
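A standalone model of the fork path described above (structure and names invented): the child inherits the parent's clamps unless the cached reset request is set, in which case defaults are restored:

    #include <stdbool.h>
    #include <string.h>

    #define UCLAMP_CNT 2                   /* UCLAMP_MIN, UCLAMP_MAX */

    struct uclamp_se { unsigned int value; bool user_defined; };

    struct task_model {
        bool reset_on_fork;                /* cached SCHED_FLAG_RESET_ON_FORK */
        struct uclamp_se uclamp[UCLAMP_CNT];
    };

    static void uclamp_fork_model(struct task_model *child,
                                  const struct task_model *parent,
                                  const struct uclamp_se defaults[UCLAMP_CNT])
    {
        memcpy(child->uclamp, parent->uclamp, sizeof(child->uclamp));
        if (parent->reset_on_fork)
            memcpy(child->uclamp, defaults, sizeof(child->uclamp));
    }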
-
The SCHED_DEADLINE scheduling class provides an advanced and formal model to define task requirements that can translate into proper decisions for both task placement and frequency selection. Other classes have a more simplified model based on the POSIX concept of priorities. Such a simple priority-based model, however, does not allow exploiting the most advanced features of the Linux scheduler, like, for example, driving frequency selection via the schedutil cpufreq governor. However, also for non-SCHED_DEADLINE tasks, it's still interesting to define task properties to support scheduler decisions. Utilization clamping exposes to user-space a new set of per-task attributes the scheduler can use as hints about the expected/required utilization for a task. This allows implementing a "proactive" per-task frequency control policy, a more advanced policy than the current one based just on "passive" measured task utilization. For example, it's possible to boost interactive tasks (e.g. to get better performance) or cap background tasks (e.g. to be more energy/thermal efficient). Introduce a new API to set utilization clamping values for a specified task by extending sched_setattr(), a syscall which already allows defining task-specific properties for different scheduling classes. A new pair of attributes allows specifying a minimum and maximum utilization the scheduler can consider for a task. Do that by validating the requested clamp values first and then applying the required changes using _the_ same pattern already in use for __setscheduler(). This ensures that the task is re-enqueued with the new clamp values. Do not allow changing scheduling-class-specific params and non-class-specific params (i.e. clamp values) at the same time. This keeps things simple and still works for the most common cases, since we are usually interested in just one of the two actions. Signed-off-by:
Patrick Bellasi <patrick.bellasi@arm.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> --- Changes in v7: Message-ID: <20190124123814.GM13777@hirez.programming.kicks-ass.net> - split validation code from the actual state-changing code - for the state-changing code, use _the_ same pattern that __setscheduler() and other code already use, i.e. dequeue-change-enqueue - add SCHED_FLAG_KEEP_PARAMS and use it to skip __setscheduler() when policy and params are not specified
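For reference, a hedged user-space example using the interface as it eventually landed in mainline (v5.3+): the sched_util_min/sched_util_max fields and the SCHED_FLAG_* values below are the mainline ones and may differ slightly from this v7 posting:

    #define _GNU_SOURCE
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    struct sched_attr {                    /* mirrors the uapi struct sched_attr */
        uint32_t size;
        uint32_t sched_policy;
        uint64_t sched_flags;
        int32_t  sched_nice;
        uint32_t sched_priority;
        uint64_t sched_runtime, sched_deadline, sched_period;
        uint32_t sched_util_min, sched_util_max;
    };

    #define SCHED_FLAG_KEEP_POLICY    0x08
    #define SCHED_FLAG_KEEP_PARAMS    0x10
    #define SCHED_FLAG_UTIL_CLAMP_MIN 0x20
    #define SCHED_FLAG_UTIL_CLAMP_MAX 0x40

    int main(void)
    {
        struct sched_attr attr;

        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        /* keep the current policy and params, only update the clamps */
        attr.sched_flags = SCHED_FLAG_KEEP_POLICY | SCHED_FLAG_KEEP_PARAMS |
                           SCHED_FLAG_UTIL_CLAMP_MIN | SCHED_FLAG_UTIL_CLAMP_MAX;
        attr.sched_util_min = 512;         /* boost to at least half capacity */
        attr.sched_util_max = 1024;        /* no cap */

        if (syscall(SYS_sched_setattr, 0, &attr, 0))
            perror("sched_setattr");
        return 0;
    }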
-
The sched_setattr() syscall mandates that a policy is always specified. This requires always knowing which policy a task will have when attributes are configured, and it makes it impossible to add more generic task attributes valid across different scheduling policies. Reading the policy before setting generic task attributes is racy, since we cannot be sure it is not changed concurrently. Introduce the required support to change generic task attributes without affecting the current task policy. Do this by adding an attribute flag (SCHED_FLAG_KEEP_POLICY) to enforce the usage of the current policy, extending the sched_setattr() non-POSIX syscall with the SETPARAM_POLICY behavior already used by the sched_setparam() POSIX syscall. Signed-off-by:
Patrick Bellasi <patrick.bellasi@arm.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> --- Changes in v7: Message-ID: <20190125135646.j4j2onitam4mwvcw@google.com> - fix definition of SCHED_POLICY_MAX
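A small model of the flag's intended semantics (SETPARAM_POLICY and the flag value are as in mainline; the resolution helper is invented for illustration): asking to keep the policy behaves as if the SETPARAM_POLICY sentinel had been passed, so the task's current policy is preserved while other attributes change:

    #define SETPARAM_POLICY        -1      /* "keep the current policy" sentinel */
    #define SCHED_FLAG_KEEP_POLICY 0x08

    static int resolve_policy(unsigned long flags, int requested_policy,
                              int current_policy)
    {
        if (flags & SCHED_FLAG_KEEP_POLICY)
            requested_policy = SETPARAM_POLICY;
        return requested_policy == SETPARAM_POLICY ? current_policy
                                                   : requested_policy;
    }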
-
Tasks without a user-defined clamp value are considered not clamped and by default their utilization can have any value in the [0..SCHED_CAPACITY_SCALE] range. Tasks with a user-defined clamp value are allowed to request any value in that range, and the required clamps are unconditionally enforced. However, a "System Management Software" could be interested in limiting the range of clamp values allowed for all tasks. Add a privileged interface to define a system default configuration via: /proc/sys/kernel/sched_uclamp_util_{min,max} which works as an unconditional clamp range restriction for all tasks. The default configuration allows the full range of SCHED_CAPACITY_SCALE values for each clamp index. If otherwise configured, a task-specific clamp is always capped by the corresponding system default value. Do that by tracking, for each task, the "effective" clamp value and bucket the task has actually been refcounted in at enqueue time. This allows lazily aggregating "requested" and "system default" values at enqueue time and simplifies refcounting updates at dequeue time. The cached bucket ids are used to avoid (relatively) more expensive integer divisions every time a task is enqueued. An active flag is used to report when the "effective" value is valid and thus the task is actually refcounted in the corresponding rq's bucket. Signed-off-by:
Patrick Bellasi <patrick.bellasi@arm.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> --- Changes in v7: Message-ID: <20190124123009.2yulcf25ld66popd@e110439-lin> - make system defaults support a "nice" policy where a task, for each clamp index, can only get "up to" what is allowed by the system default setting, i.e. tasks are always allowed to request less
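A hedged sketch of the enqueue-time bookkeeping described above (bucket count and structure names are illustrative, not the patch's): the effective value is the request capped by the system default, and both the bucket id and the active flag are cached alongside it:

    #include <stdbool.h>

    #define SCHED_CAPACITY_SCALE 1024u
    #define UCLAMP_BUCKETS       5u
    #define UCLAMP_BUCKET_DELTA  (SCHED_CAPACITY_SCALE / UCLAMP_BUCKETS)

    struct uclamp_se {
        unsigned int value;        /* effective clamp value */
        unsigned int bucket_id;    /* cached to avoid divisions at enqueue */
        bool active;               /* refcounted in the rq bucket? */
    };

    static void uclamp_effective_model(struct uclamp_se *uc,
                                       unsigned int requested, unsigned int sys_default)
    {
        uc->value = requested < sys_default ? requested : sys_default;
        uc->bucket_id = uc->value / UCLAMP_BUCKET_DELTA;
        if (uc->bucket_id >= UCLAMP_BUCKETS)
            uc->bucket_id = UCLAMP_BUCKETS - 1;
        uc->active = true;
    }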
-
When a task sleeps, it removes its max utilization clamp from its CPU. However, the blocked utilization on that CPU can be higher than the max clamp value enforced while the task was running. This allows undesired CPU frequency increases while a CPU is idle: for example, when another CPU on the same frequency domain triggers a frequency update, schedutil can now see the full, unclamped blocked utilization of the idle CPU. Fix this by using:
  uclamp_rq_dec_id(p, rq, UCLAMP_MAX)
  uclamp_rq_update(rq, UCLAMP_MAX, clamp_value)
to detect when a CPU has no more RUNNABLE clamped tasks and to flag this condition. Don't track any minimum utilization clamps, since an idle CPU never requires a minimum frequency; the decay of the blocked utilization is good enough to reduce the CPU frequency. Signed-off-by:
Patrick Bellasi <patrick.bellasi@arm.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org>
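A rough standalone model of the dequeue path described above (names and structure invented, locking and the per-bucket details ignored): when the last RUNNABLE clamped task leaves, the condition is flagged and the last max clamp is kept, so blocked utilization cannot drive the frequency above it:

    #include <stdbool.h>

    struct cpu_max_clamp {
        unsigned int value;        /* aggregated UCLAMP_MAX of RUNNABLE tasks */
        unsigned int nr_runnable;  /* RUNNABLE tasks refcounting a max clamp */
        bool clamp_idle;           /* no RUNNABLE clamped task left */
    };

    static void max_clamp_dequeue(struct cpu_max_clamp *s, unsigned int last_clamp)
    {
        if (--s->nr_runnable)
            return;
        s->clamp_idle = true;      /* frequency requests stay capped */
        s->value = last_clamp;
    }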
-
Utilization clamping allows clamping a CPU's utilization within a [util_min, util_max] range, depending on the set of RUNNABLE tasks on that CPU. Each task references two "clamp buckets" defining its minimum and maximum (util_{min,max}) utilization "clamp values". A CPU's clamp bucket is active if there is at least one RUNNABLE task enqueued on that CPU refcounting that bucket. When a task is {en,de}queued {on,from} a rq, the set of active clamp buckets on that CPU can change. Since each clamp bucket enforces a different utilization clamp value, when the set of active clamp buckets changes, a new "aggregated" clamp value is computed for that CPU. Clamp values are always MAX-aggregated for both util_min and util_max. This ensures that no task can affect the performance of other co-scheduled tasks which are more boosted (i.e. with higher util_min clamp) or less capped (i.e. with higher util_max clamp).
Each task has a:
  task_struct::uclamp[clamp_id]::bucket_id
to track the "bucket index" of the CPU's clamp bucket it refcounts while enqueued, for each clamp index (clamp_id).
Each CPU's rq has a:
  rq::uclamp[clamp_id]::bucket[bucket_id].tasks
to track how many tasks, currently RUNNABLE on that CPU, refcount each clamp bucket (bucket_id) of a clamp index (clamp_id).
Each CPU's rq also has a:
  rq::uclamp[clamp_id]::bucket[bucket_id].value
to track the clamp value of each clamp bucket (bucket_id) of a clamp index (clamp_id).
The rq::uclamp[clamp_id]::bucket[] array is scanned every time we need to find a new MAX-aggregated clamp value for a clamp_id. This operation is required only when we dequeue the last task of a clamp bucket tracking the current MAX-aggregated clamp value. In these cases, the CPU is either entering IDLE or going to schedule a less boosted or more clamped task. The expected number of different clamp values, configured at build time, is small enough to fit the full unordered array into a single cache line. Add the basic data structures required to refcount, in each CPU's rq, the number of RUNNABLE tasks for each clamp bucket. Also add the max aggregation required to update the rq's clamp value at each enqueue/dequeue event. Use a simple linear mapping of clamp values into clamp buckets. Pre-compute and cache bucket_id to avoid integer divisions at enqueue/dequeue time. Signed-off-by:
Patrick Bellasi <patrick.bellasi@arm.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> --- Changes in v7: Message-ID: <20190123191007.GG17749@hirez.programming.kicks-ass.net> - removed buckets mapping code - use a simpler linear mapping of clamp values into buckets Message-ID: <20190124161443.lv2pw5fsspyelckq@e110439-lin> - move this patch to the beginning of the series, in an attempt to make the overall series easier to digest by putting the core bits and main data structures first Others: - update the mapping logic to use exactly and only UCLAMP_BUCKETS_COUNT buckets, i.e. no more "special" bucket - update uclamp_rq_update() to do a top-down max search
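A simplified standalone model of the refcounting and max aggregation described above (five buckets chosen arbitrarily; locking and the kernel's refinements, e.g. rescanning only when the max bucket empties, are omitted):

    #define SCHED_CAPACITY_SCALE 1024u
    #define UCLAMP_BUCKETS       5u
    #define UCLAMP_BUCKET_DELTA  (SCHED_CAPACITY_SCALE / UCLAMP_BUCKETS)

    struct uclamp_bucket { unsigned int value; unsigned int tasks; };

    struct rq_uclamp {
        unsigned int value;                      /* current MAX-aggregated clamp */
        struct uclamp_bucket bucket[UCLAMP_BUCKETS];
    };

    /* Enqueue: refcount the task's bucket and raise the rq clamp if needed. */
    static void rq_clamp_inc(struct rq_uclamp *rc, unsigned int bucket_id,
                             unsigned int clamp_value)
    {
        rc->bucket[bucket_id].tasks++;
        rc->bucket[bucket_id].value = clamp_value;
        if (clamp_value > rc->value)
            rc->value = clamp_value;
    }

    /* Dequeue: drop the refcount; when a bucket goes empty, rescan top-down
     * for the new MAX-aggregated value. */
    static void rq_clamp_dec(struct rq_uclamp *rc, unsigned int bucket_id)
    {
        if (--rc->bucket[bucket_id].tasks)
            return;
        rc->value = 0;
        for (int b = UCLAMP_BUCKETS - 1; b >= 0; b--) {
            if (rc->bucket[b].tasks) {
                rc->value = rc->bucket[b].value;
                break;
            }
        }
    }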
-
Signed-off-by:
Quentin Perret <quentin.perret@arm.com>
-
Signed-off-by:
Quentin Perret <quentin.perret@arm.com>
-
This commit replaces the open-coded CPU-offline notification with new common code. In particular, this change avoids calling scheduler code using RCU from an offline CPU that RCU is ignoring. This is a minimal change. A more intrusive change might invoke the cpu_check_up_prepare() and cpu_set_state_online() functions at CPU-online time, which would allow onlining to throw an error if the CPU did not go offline properly. Signed-off-by:
Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: linux-arm-kernel@lists.infradead.org Cc: Russell King <linux@arm.linux.org.uk> Tested-by:
Geert Uytterhoeven <geert+renesas@glider.be>
-
Commit 7ee7ef24 ("scsi: arm64: defconfig: enable configs for Hisilicon ufs") enabled the Hisilicon UFS, which means that we can flash a rootfs to the on-board flash. However, as it stands, the kernel gets stuck on:
  [ 3.360733] Waiting for root device /dev/sdd10...
That seems to be because even though we have SCSI_UFS_HISI=y, SCSI_UFSHCD and SCSI_UFSHCD_PLATFORM are set to 'm', which means the required drivers won't be built-in. We need those to load the rootfs and then load the modules, so set them as built-ins. Signed-off-by:
Valentin Schneider <valentin.schneider@arm.com>
-
Signed-off-by:
Chen Feng <puck.chen@hisilicon.com>
-
Signed-off-by:
John Stultz <john.stultz@linaro.org>
-
Add "hisilicon,hi3660-reboot" node for hi3660. Eventually when we've transitioned to UEFI this can be dropped. As we can then use syscon-reboot-mode. Signed-off-by:
Chen Feng <puck.chen@hisilicon.com> Signed-off-by:
Chen Jun <chenjun14@huawei.com>
-
Signed-off-by:
Valentin Schneider <valentin.schneider@arm.com>
-
This patch adds support for USB on HiKey960. Cc: Chunfeng Yun <chunfeng.yun@mediatek.com> Cc: Wei Xu <xuwei5@hisilicon.com> Cc: Rob Herring <robh+dt@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: linux-arm-kernel@lists.infradead.org Cc: John Stultz <john.stultz@linaro.org> Cc: Binghui Wang <wangbinghui@hisilicon.com> Signed-off-by:
Yu Chen <chenyu56@huawei.com>
-
Currently the "match_existing_only" of usb_gadget_driver in configfs is set to one which is not flexible. Dwc3 udc will be removed when usb core switch to host mode. This causes failure of writing name of dwc3 udc to configfs's UDC attribuite. To fix this we need to add a way to change the config of "match_existing_only". There are systems like Android do not support udev, so adding "match_existing_only" attribute to allow configuration by user is cost little. This patch adds a configfs attribuite for controling match_existing_only which allow user to config "match_existing_only". Cc: Andy Shevchenko <andy.shevchenko@gmail.com> Cc: Felipe Balbi <balbi@kernel.org> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: John Stultz <john.stultz@linaro.org> Cc: Binghui Wang <wangbinghui@hisilicon.com> Signed-off-by:
Yu Chen <chenyu56@huawei.com>
-
This driver handles USB hub power-on and Type-C port events on the HiKey960 board:
1) DP & DM switching between the USB hub and the Type-C port, based on the Type-C port state
2) controlling the power of the USB hub on HiKey960
3) controlling the VBUS of the Type-C port
Cc: Chunfeng Yun <chunfeng.yun@mediatek.com> Cc: Andy Shevchenko <andy.shevchenko@gmail.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: John Stultz <john.stultz@linaro.org> Cc: Binghui Wang <wangbinghui@hisilicon.com> Cc: Heikki Krogerus <heikki.krogerus@linux.intel.com> Signed-off-by:
Yu Chen <chenyu56@huawei.com>
-
The Type-C drivers use the USB role switch API to inform the system about the negotiated data role, so register a role switch in the DRD code in order to support platforms with USB Type-C connectors. Cc: Jun Li <lijun.kernel@gmail.com> Cc: Valentin Schneider <valentin.schneider@arm.com> Cc: John Stultz <john.stultz@linaro.org> Cc: Felipe Balbi <balbi@kernel.org> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Heikki Krogerus <heikki.krogerus@linux.intel.com> Suggested-by:
Heikki Krogerus <heikki.krogerus@linux.intel.com> Signed-off-by:
Yu Chen <chenyu56@huawei.com>
-
This patch adds a notifier for drivers that want to be informed of USB role switch changes. Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Heikki Krogerus <heikki.krogerus@linux.intel.com> Cc: Hans de Goede <hdegoede@redhat.com> Cc: Andy Shevchenko <andy.shevchenko@gmail.com> Cc: John Stultz <john.stultz@linaro.org> Suggested-by:
Heikki Krogerus <heikki.krogerus@linux.intel.com> Signed-off-by:
Yu Chen <chenyu56@huawei.com>
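Since the changelog is terse, here is a hedged standalone C model of the general pattern such a facility builds on (none of these names are the API added by this patch): interested drivers register a callback, and the role-switch core walks the chain whenever the role changes:

    #include <stdio.h>

    enum usb_role { USB_ROLE_NONE, USB_ROLE_HOST, USB_ROLE_DEVICE };

    struct role_notifier {
        void (*fn)(enum usb_role role, void *cookie);
        void *cookie;
        struct role_notifier *next;
    };

    static struct role_notifier *chain;

    /* Drivers interested in role changes register a callback... */
    static void role_switch_register_notifier(struct role_notifier *nb)
    {
        nb->next = chain;
        chain = nb;
    }

    /* ...and the role-switch core invokes the chain on every role change. */
    static void role_switch_set_role(enum usb_role role)
    {
        for (struct role_notifier *nb = chain; nb; nb = nb->next)
            nb->fn(role, nb->cookie);
    }

    static void phy_role_changed(enum usb_role role, void *cookie)
    {
        printf("%s: new role %d\n", (const char *)cookie, (int)role);
    }

    int main(void)
    {
        struct role_notifier nb = { phy_role_changed, "example-phy", NULL };

        role_switch_register_notifier(&nb);
        role_switch_set_role(USB_ROLE_HOST);
        return 0;
    }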
-
This driver handles USB PHY power-on and shutdown for the Hi3660 SoC from HiSilicon. Cc: Andy Shevchenko <andy.shevchenko@gmail.com> Cc: Kishon Vijay Abraham I <kishon@ti.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Mauro Carvalho Chehab <mchehab+samsung@kernel.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Shawn Guo <shawnguo@kernel.org> Cc: Pengcheng Li <lpc.li@hisilicon.com> Cc: Jianguo Sun <sunjianguo1@huawei.com> Cc: Masahiro Yamada <yamada.masahiro@socionext.com> Cc: Jiancheng Xue <xuejiancheng@hisilicon.com> Cc: John Stultz <john.stultz@linaro.org> Cc: Binghui Wang <wangbinghui@hisilicon.com> Reviewed-by:
Andy Shevchenko <andy.shevchenko@gmail.com> Signed-off-by:
Yu Chen <chenyu56@huawei.com>
-
The device controller needs more time to clear the CmdAct bit of DEPCMD on the HiSilicon Kirin SoC. Cc: Andy Shevchenko <andy.shevchenko@gmail.com> Cc: Felipe Balbi <balbi@kernel.org> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: John Stultz <john.stultz@linaro.org> Cc: Binghui Wang <wangbinghui@hisilicon.com> Signed-off-by:
Yu Chen <chenyu56@huawei.com>
-
A GCTL soft reset should be executed when switching modes on the dwc3 core of the HiSilicon Kirin SoC. Cc: Andy Shevchenko <andy.shevchenko@gmail.com> Cc: Felipe Balbi <balbi@kernel.org> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: John Stultz <john.stultz@linaro.org> Cc: Binghui Wang <wangbinghui@hisilicon.com> Signed-off-by:
Yu Chen <chenyu56@huawei.com>
-
SPLIT_BOUNDARY_DISABLE should be set for the DesignWare USB3 DRD core of the HiSilicon Kirin SoC when the dwc3 core acts as host. Cc: Andy Shevchenko <andy.shevchenko@gmail.com> Cc: Felipe Balbi <balbi@kernel.org> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: John Stultz <john.stultz@linaro.org> Cc: Binghui Wang <wangbinghui@hisilicon.com> Signed-off-by:
Yu Chen <chenyu56@huawei.com>
-
This patch adds support for powering on and shutting down the dwc3 core on the HiSilicon SoC platform. Cc: Felipe Balbi <balbi@kernel.org> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: John Stultz <john.stultz@linaro.org> Signed-off-by:
Yu Chen <chenyu56@huawei.com>
-
dt-bindings: misc: Add bindings for HiSilicon USB hub and data role switch functionality on HiKey960
This patch adds binding documentation to support the USB hub and USB data role switch on the HiSilicon HiKey960 board. Cc: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com> Cc: Rob Herring <robh+dt@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: John Stultz <john.stultz@linaro.org> Cc: Binghui Wang <wangbinghui@hisilicon.com> Signed-off-by:
Yu Chen <chenyu56@huawei.com>
-
- Mar 24, 2019
-
-
Linus Torvalds authored
-
git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4
Linus Torvalds authored
Pull ext4 fixes from Ted Ts'o: "Miscellaneous ext4 bug fixes for 5.1"
* tag 'ext4_for_linus_stable' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4:
  ext4: prohibit fstrim in norecovery mode
  ext4: cleanup bh release code in ext4_ind_remove_space()
  ext4: brelse all indirect buffer in ext4_ind_remove_space()
  ext4: report real fs size after failed resize
  ext4: add missing brelse() in add_new_gdb_meta_bg()
  ext4: remove useless ext4_pin_inode()
  ext4: avoid panic during forced reboot
  ext4: fix data corruption caused by unaligned direct AIO
  ext4: fix NULL pointer dereference while journal is aborted
-
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Linus Torvalds authored
Pull scheduler updates from Thomas Gleixner: "Third more careful attempt for this set of fixes:
  - Prevent a 32bit math overflow in the cpufreq code
  - Fix a buffer overflow when scanning the cgroup2 cpu.max property
  - A set of fixes for the NOHZ scheduler logic to prevent waking up CPUs even if the capacity of the busy CPUs is sufficient along with other tweaks optimizing the behaviour for asymmetric systems (big/little)"
* 'sched-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  sched/fair: Skip LLC NOHZ logic for asymmetric systems
  sched/fair: Tune down misfit NOHZ kicks
  sched/fair: Comment some nohz_balancer_kick() kick conditions
  sched/core: Fix buffer overflow in cgroup2 property cpu.max
  sched/cpufreq: Fix 32-bit math overflow
-
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Linus Torvalds authored
Pull perf updates from Thomas Gleixner: "A larger set of perf updates. Not all of them are strictly fixes, but that's solely the tip maintainers' fault, as they let the timely -rc1 pull request fall through the cracks for various reasons including travel. So I'm sending this nevertheless, because rebasing and disentangling fixes and updates would be a mess and risky as well. As of tomorrow, a strict fixes separation is happening again. Sorry for the slip-up.
Kernel:
  - Handle RECORD_MMAP vs. RECORD_MMAP2 correctly so different consumers of the mmap event get what they requested.
Tools:
  - A larger set of updates to perf record/report/scripts vs. time stamp handling
  - More Python3 fixups
  - A pile of memory leak plumbing
  - perf BPF improvements and fixes
  - Finalize the perf.data directory storage"
[ Note: the kernel part is strictly a fix, the updates are purely to tooling - Linus ]
* 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (75 commits)
  perf bpf: Show more BPF program info in print_bpf_prog_info()
  perf bpf: Extract logic to create program names from perf_event__synthesize_one_bpf_prog()
  perf tools: Save bpf_prog_info and BTF of new BPF programs
  perf evlist: Introduce side band thread
  perf annotate: Enable annotation of BPF programs
  perf build: Check what binutils's 'disassembler()' signature to use
  perf bpf: Process PERF_BPF_EVENT_PROG_LOAD for annotation
  perf symbols: Introduce DSO_BINARY_TYPE__BPF_PROG_INFO
  perf feature detection: Add -lopcodes to feature-libbfd
  perf top: Add option --no-bpf-event
  perf bpf: Save BTF information as headers to perf.data
  perf bpf: Save BTF in a rbtree in perf_env
  perf bpf: Save bpf_prog_info information as headers to perf.data
  perf bpf: Save bpf_prog_info in a rbtree in perf_env
  perf bpf: Make synthesize_bpf_events() receive perf_session pointer instead of perf_tool
  perf bpf: Synthesize bpf events with bpf_program__get_prog_info_linear()
  bpftool: use bpf_program__get_prog_info_linear() in prog.c:do_dump()
  tools lib bpf: Introduce bpf_program__get_prog_info_linear()
  perf record: Replace option --bpf-event with --no-bpf-event
  perf tests: Fix a memory leak in test__perf_evsel__tp_sched_test()
  ...
-
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Linus Torvalds authored
Pull x86 fixes from Thomas Gleixner: "A set of x86 fixes:
  - Prevent potential NULL pointer dereferences in the HPET and HyperV code
  - Exclude the GART aperture from /proc/kcore to prevent kernel crashes on access
  - Use the correct macros for Cyrix I/O on Geode processors
  - Remove yet another kernel address printk leak
  - Announce microcode reload completion as requested by quite some people. Microcode loading has become popular recently.
  - Some 'Make Clang' happy fixlets
  - A few cleanups for recently added code"
* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/gart: Exclude GART aperture from kcore
  x86/hw_breakpoints: Make default case in hw_breakpoint_arch_parse() return an error
  x86/mm/pti: Make local symbols static
  x86/cpu/cyrix: Remove {get,set}Cx86_old macros used for Cyrix processors
  x86/cpu/cyrix: Use correct macros for Cyrix calls on Geode processors
  x86/microcode: Announce reload operation's completion
  x86/hyperv: Prevent potential NULL pointer dereference
  x86/hpet: Prevent potential NULL pointer dereference
  x86/lib: Fix indentation issue, remove extra tab
  x86/boot: Restrict header scope to make Clang happy
  x86/mm: Don't leak kernel addresses
  x86/cpufeature: Fix various quality problems in the <asm/cpu_device_hd.h> header
-
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Linus Torvalds authored
Pull timer fixes from Thomas Gleixner: "A set of small fixes plus the removal of stale board support code:
  - Remove the board support code from the clps711x clocksource driver. This change had fallen through the cracks, and I'm sending it now rather than dealing with people who want to improve that stale code for 3 months.
  - Use the proper clocksource mask on RISC-V
  - Make local scope functions and variables static"
* 'timers-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  clocksource/drivers/clps711x: Remove board support
  clocksource/drivers/riscv: Fix clocksource mask
  clocksource/drivers/mips-gic-timer: Make gic_compare_irqaction static
  clocksource/drivers/timer-ti-dm: Make omap_dm_timer_set_load_start() static
  clocksource/drivers/tcb_clksrc: Make tc_clksrc_suspend/resume() static
  clocksource/drivers/clps711x: Make clps711x_clksrc_init() static
  time/jiffies: Make refined_jiffies static
-