Choice of methods for data treatment in reliability test planning (case of accelerated aging tests)



A. Rodionov, J. Holy

LAD-2004 International Conference on Longevity, Aging and Degradation Models in Reliability, Public Health, Medicine and Biology, St. Petersburg, Russia, 7-9 June 2004

Report IRSN-DSR/22

Document type > Report/contribution to a working group (paper or CD-ROM), Conference/symposium

Keywords > safety, reliability, nuclear facility, aging

Research unit > IRSN/DSR/SESPRI

Authors > RODIONOV Andréi

Publication date > 15/06/2004

Abstract

Presently, most reliability models applied for safety and reliability assessments of complex technical systems are based on the "constant failure rate" assumption. In the case of premature equipment aging, inappropriate maintenance or lifetime extension, this assumption may no longer be valid. Consequently, an aging-related failure rate model has to be applied.
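As a minimal illustration of this distinction (not part of the paper), the Weibull hazard rate below reduces to a constant failure rate only when the shape parameter beta equals 1, while beta > 1 gives an increasing, aging-related failure rate; the parameter values are hypothetical:

# Hazard (failure) rate of a Weibull time-to-failure model:
#   h(t) = (beta / eta) * (t / eta)**(beta - 1)
# beta == 1 gives the constant rate h(t) = 1 / eta (exponential model);
# beta > 1 gives an increasing, i.e. aging-related, failure rate.
# eta (scale, hours) and beta (shape) are hypothetical illustration values.

def weibull_hazard(t, eta, beta):
    return (beta / eta) * (t / eta) ** (beta - 1)

for t in (1e3, 1e4, 1e5):
    constant = weibull_hazard(t, eta=2e5, beta=1.0)  # no aging
    aging = weibull_hazard(t, eta=2e5, beta=2.5)     # aging
    print(f"t = {t:8.0f} h   constant: {constant:.2e}/h   aging: {aging:.2e}/h")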

One of the issues related to the application of aging-related failure rate models is the estimation of the model parameters. In this particular case, the operational experience data normally used for constant failure rate calculation cover only one third of the expected lifetime and cannot be used to estimate the parameters of a predictive aging model. Given the absence of representative operational experience data, one possible solution is to perform laboratory reliability tests on selected equipment.

The test approach is based on simulating, in an accelerated way, the potential aging mechanisms expected during the design life and the extended life of the component. These aging mechanisms initiate component degradation and can cause failures.

The data on "equivalent times to failure" (for the failures that occur during the test) and on censoring times are then used to estimate the statistical model parameters.
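A minimal sketch of this estimation step, assuming a Weibull model fitted by maximum likelihood to right-censored test data (the data values and the use of scipy are illustrative assumptions, not the paper's method):

import numpy as np
from scipy.optimize import minimize

# Hypothetical test outcome: equivalent times to failure (hours) and
# censoring times for items that survived to the end of the test.
failures = np.array([1.2e4, 1.8e4, 2.3e4, 3.1e4])
censored = np.array([4.0e4] * 6)  # six items still working at test end

def neg_log_likelihood(params):
    """Weibull log-likelihood with right censoring: failures contribute
    log f(t), censored items contribute log S(t)."""
    log_eta, log_beta = params  # log-parametrization keeps eta, beta > 0
    eta, beta = np.exp(log_eta), np.exp(log_beta)
    log_f = (np.log(beta / eta) + (beta - 1) * np.log(failures / eta)
             - (failures / eta) ** beta)
    log_s = -(censored / eta) ** beta
    return -(log_f.sum() + log_s.sum())

res = minimize(neg_log_likelihood, x0=[np.log(3e4), np.log(1.5)],
               method="Nelder-Mead")
eta_hat, beta_hat = np.exp(res.x)
print(f"eta = {eta_hat:.3g} h, beta = {beta_hat:.3g}")  # beta > 1 suggests aging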

On the one hand, to ensure the statistical representativeness of the test results, the sample size (the number of items to be tested) has to be large enough; on the other hand, due to the cost of the experiments, it has to be as small as possible. It should be noted that existing reliability standards do not propose an optimal solution to this problem.

The presentation proposes a strategy for specifying the sample size, assuming a Weibull distribution for the times to failure, and for choosing the data treatment approach depending on the number of observed failures. The possibility of combining generic data, as well as existing operational experience, with the test results is also discussed.
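As a hedged illustration of the kind of sample-size calculation this involves (the assumed Weibull parameters, test duration and target failure count are hypothetical, not values from the presentation), one can simulate the test to find the smallest sample expected to yield enough observed failures:

import numpy as np

rng = np.random.default_rng(0)

def expected_failures(n_items, eta, beta, test_duration, n_sim=10_000):
    """Monte Carlo estimate of the mean number of failures observed when
    n_items are tested up to test_duration (survivors are censored)."""
    t = eta * rng.weibull(beta, size=(n_sim, n_items))  # Weibull(eta, beta) draws
    return (t <= test_duration).sum(axis=1).mean()

# Hypothetical planning inputs: assumed Weibull(eta=5e4 h, beta=2) aging
# model, accelerated test equivalent to 4e4 h of service, and a target of
# at least 5 observed failures for the chosen data treatment approach.
for n in range(5, 41, 5):
    m = expected_failures(n, eta=5e4, beta=2.0, test_duration=4e4)
    print(f"sample size {n:2d}: expected failures ~ {m:.1f}")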