
Re: G1GC CPU Spike


Thanks Chris, I did attach the gc logs already. Reattaching them now.

Attachment: gc.log.1.current
Description: Binary data


It started yesterday around 11:54 PM.
> On Jun 13, 2018, at 3:56 PM, Chris Lohfink <clohfink@xxxxxxxxx> wrote:
> 
>> What is the criterion for picking the value of G1ReservePercent?
> 
> 
> It depends on the object allocation rate vs the size of the heap. Cassandra would ideally stay under 500-600 MB/s of allocations, but it can spike quite high with something like reading a wide partition or repair streaming, which may exceed what the G1 young GC's tenuring and timing are prepared for based on the previous steady rate. Giving it a bigger buffer is a nice safety net for allocation spikes.
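> 
> For illustration, the reserve is set in conf/jvm.options. A minimal sketch (the flag names are standard HotSpot options, but the values here are only examples; pick them from your own heap size and allocation rate):
> 
>     ## conf/jvm.options -- G1 settings (illustrative values)
>     -XX:+UseG1GC
>     ## keep ~20% of the heap free as a buffer for allocation spikes
>     ## (the HotSpot default is 10)
>     -XX:G1ReservePercent=20
>     ## start concurrent marking earlier so mixed GCs keep up
>     -XX:InitiatingHeapOccupancyPercent=45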
> 
>> Is HEAP_NEWSIZE required only for CMS?
> 
> 
> HEAP_NEWSIZE should only set Xmn when using CMS; with G1 it should be ignored, or else yes, it would be bad to set Xmn. The gc logs contain the results of all the bash scripts along with the details of what's happening, so sharing them is your best option if you want help.
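> 
> For reference, GC logging is enabled with the usual Java 8 flags in Cassandra's conf/jvm.options; roughly like this sketch (the log path is an assumption, adjust it for your install):
> 
>     ## conf/jvm.options -- GC logging (Java 8 style flags)
>     -XX:+PrintGCDetails
>     -XX:+PrintGCDateStamps
>     -XX:+PrintHeapAtGC
>     -XX:+PrintTenuringDistribution
>     -XX:+PrintGCApplicationStoppedTime
>     -XX:+PrintPromotionFailure
>     ## assumed path; rotate so the logs do not grow unbounded
>     -Xloggc:/var/log/cassandra/gc.log
>     -XX:+UseGCLogFileRotation
>     -XX:NumberOfGCLogFiles=10
>     -XX:GCLogFileSize=10M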
> 
> Chris
> 
>> On Jun 13, 2018, at 12:17 PM, Subroto Barua <sbarua116@xxxxxxxxx.INVALID> wrote:
>> 
>> Chris,
>> What is the criterion for picking the value of G1ReservePercent?
>> 
>> Subroto 
>> 
>>> On Jun 13, 2018, at 6:52 AM, Chris Lohfink <clohfink@xxxxxxxxx> wrote:
>>> 
>>> G1ReservePercent
>> 


---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@xxxxxxxxxxxxxxxxxxxx
For additional commands, e-mail: user-help@xxxxxxxxxxxxxxxxxxxx