
Re: Problem with dropped mutations


Yes, there are timeouts sometimes, but more on the read side. And yes, there are certain data modeling problems which will be addressed soon, but we need to keep things steady before we get there.

I guess many write timeouts go unnoticed because we write at a consistency level != ALL, so a request can still succeed even if some replica drops the mutation.
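
Roughly what I mean, as a sketch with the Python driver (the contact point, keyspace, table and column names below are just placeholders): with consistency level ALL, a mutation dropped on any replica comes back to the client as a WriteTimeout, whereas at QUORUM or ONE the same write can still succeed and the drop only shows up in the node's dropped-message counters.

    import uuid
    from cassandra import ConsistencyLevel, WriteTimeout
    from cassandra.cluster import Cluster
    from cassandra.query import SimpleStatement

    # Placeholder contact point and keyspace.
    session = Cluster(['127.0.0.1']).connect('my_keyspace')

    # Demand an ack from every replica; a mutation dropped on any of
    # them now surfaces here as a WriteTimeout instead of passing silently.
    stmt = SimpleStatement(
        "INSERT INTO events (id, payload) VALUES (%s, %s)",
        consistency_level=ConsistencyLevel.ALL)
    try:
        session.execute(stmt, (uuid.uuid4(), 'test payload'))
    except WriteTimeout as exc:
        print("replica(s) did not acknowledge in time:", exc)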

Network looks to be working fine. 

Hannu

> ZAIDI, ASAD A <az192g@xxxxxxx> wrote on 26.6.2018 at 21.42:
> 
> Are you also seeing timeouts on certain Cassandra operations? If yes, you may have to tweak the *_request_timeout_in_ms parameters in cassandra.yaml to get rid of the dropped-mutation messages if the application data model is not up to the mark!
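> 
> For reference, these are the relevant timeouts in cassandra.yaml (defaults below are from a stock 3.x config; exact names and values can differ between versions). A node drops a MUTATION roughly when the message has waited longer than the write timeout, so raising these mainly buys headroom and can hide the symptom if a node is simply overloaded:
> 
>     # cassandra.yaml (defaults shown; tune with care)
>     write_request_timeout_in_ms: 2000     # coordinator wait for regular writes
>     read_request_timeout_in_ms: 5000      # coordinator wait for reads
>     request_timeout_in_ms: 10000          # default for other request types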
> 
> You can also check that the network isn't dropping packets (ifconfig -a) and that storage isn't showing overly slow disks (dstat).
> 
> Cheers/Asad
> 
> 
> -----Original Message-----
> From: Hannu Kröger [mailto:hkroger@xxxxxxxxx] 
> Sent: Tuesday, June 26, 2018 9:49 AM
> To: user <user@xxxxxxxxxxxxxxxxxxxx>
> Subject: Problem with dropped mutations
> 
> Hello,
> 
> We have a cluster under fairly heavy load and we are seeing dropped mutations (the amount varies, and not all nodes have them).
> 
> Are there any clear triggers that cause those? What would be the best pragmatic approach to start debugging them? We have already added more memory, which seemed to help somewhat but not completely.
> 
> Cheers,
> Hannu
> 
> 
> 

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@xxxxxxxxxxxxxxxxxxxx
For additional commands, e-mail: user-help@xxxxxxxxxxxxxxxxxxxx