#5322 10/08/06 4:23 PM
Newbie (OP), Joined: Sep 2006, Posts: 4
I am currently evaluating the Rigel 277, QA-90 and 601 electrical safety testers. Having experience in general appliance testing, I am wondering why only the 277 tests at 100mA.
Can anyone tell me if they have come across medical kit that requires a 100mA earth-bond test as opposed to the usual 1A, 10A or 25A? In general appliance testing a low test current is normally specified where solid-state ICs are in use, such as in IT equipment. Is that situation unlikely to arise in the medical industry, and the low-current test therefore not needed?

Any comments would be much appreciated.

Regards

Colin
#5323 10/08/06 7:45 PM
Anonymous, Unregistered
I think it's generally accepted that the objective of testing the earth on Class 1 electrical medical devices (or any Class 1 device, for that matter) is to confirm earth continuity and adequate cross-sectional area, as indicated by earth resistance, but not necessarily to test it to destruction. For type-testing to BS EN 60601-1 (the general safety standard for electrical medical equipment), a test current of 1.5 times the device load current, up to a maximum of 25A ac at no more than 6Vac, is used to measure this. Unfortunately, at 25A it is possible to damage earth connections if the test current is applied for too long to a DUT that was intended to be type-tested at a lower current.

So there's type testing and routine testing (scheduled maintenance, or testing after repair) to consider. My view is that routine testing generally calls for a lower earth-bond test current than type-testing. In my experience, Class 1 devices with load currents below 7A are common, and I've not had many problems over the years using testers that produce a 25A test current, so a time-limited 10A earth-bond test in the right hands is probably not going to cause many problems during routine testing of most Class 1 devices, including those with functional earths. A compromise.

Cable ratings have a big part to play in this: 0.75mm^2 mains cable is rated at 6A continuous and 1.0mm^2 cable at 10A continuous, for example, although a large number of devices are connected using 10A-rated IEC320 detachable mains leads these days. In any case the earth-bond test current is usually time-limited. We could avoid the potential for damage due to internal heating (I^2 R) by using a pocket multimeter to test the earth bond, but whilst this tests for earth continuity and may indicate a low resistance that's within specified limits, it may not detect that 20 strands out of a 24-strand (24/0.2) earth cable have failed, for example.

Under fault conditions, the 4 remaining strands could then fuse before the mains fuse ruptures, or break the next time the cable is flexed (or in the period between tests), rendering the Class 1 protection useless. Personally speaking, I'd rather have a dodgy protective earth bond fail during testing at 10A, when I'm on hand to observe it, than have it eventually fail when flexed in use...
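To put rough numbers on that strand example (a quick Python sketch; the 2m conductor length and 20C copper resistivity are my assumptions):

```python
# Rough numbers for the 24/0.2 strand example above.
# Assumptions (mine): a 2 m earth conductor, copper at 20 degC.
import math

RHO_CU = 1.72e-8          # resistivity of copper, ohm.m
LENGTH = 2.0              # assumed conductor length, m
STRAND_DIA = 0.2e-3       # 24/0.2 = 24 strands of 0.2 mm diameter, m

strand_area = math.pi * (STRAND_DIA / 2) ** 2   # m^2 per strand

def earth_resistance(strands):
    """Resistance of the earth conductor with a given number of intact strands."""
    return RHO_CU * LENGTH / (strands * strand_area)

print(f"24 strands intact: {earth_resistance(24) * 1000:.0f} mOhm")  # ~46 mOhm
print(f" 4 strands intact: {earth_resistance(4) * 1000:.0f} mOhm")   # ~274 mOhm
```

A sixfold rise, but still only about a quarter of an Ohm - easy to miss on a pocket multimeter whose leads alone may read that much.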

In my opinion a lower earth-bond test current means, potentially, a less effective test of cross-sectional area and a less sensitive voltage measurement (the measured voltage being earth resistance x test current). The benefits of lower current may be a reduced error due to changes in voltage across the single-wire type of test circuit as its connection resistance or test current fluctuates, and a reduced size and weight of safety tester because lower-power components can be used; but potentially this means lower sensitivity to changes in resistance if simple measuring circuits are used.

Ideally a 4-wire earth-bond resistance measurement should be used to overcome errors due to changes in the resistance of the measurement circuit connections or changes in earth-bond current, giving an accurate measurement that is not influenced by the resistance of connections in the tester or by fluctuations in the earth-bond current. At lower test currents we need to measure resistance more accurately and resolve smaller changes in the test voltage across the IUT due to changes in earth resistance or current.

We then measure only the voltage across the earth bond due to the current passing through the protective earth, not the voltage across the resistance of the tester connections, which matters when the tester is not nulled and the test lead resistance or test conditions change. We also avoid this business of having to null the resistance of the test connections every time we switch on the tester or change the earth test connection.
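A minimal sketch of the difference (the bond and lead resistances below are illustrative assumptions, not figures from any particular tester):

```python
# Minimal sketch of 2-wire vs 4-wire (Kelvin) readings.
# The bond and lead resistances below are illustrative assumptions.

R_BOND = 0.080    # true earth-bond resistance, ohm
R_LEADS = 0.150   # test lead + clip contact resistance, ohm
I_TEST = 10.0     # constant earth-bond test current, A

# 2-wire: the sensed voltage includes the drop across the leads,
# so the lead resistance appears in the reading unless nulled out.
r_2wire = (I_TEST * (R_BOND + R_LEADS)) / I_TEST

# 4-wire: current flows in the force leads, but voltage is sensed by
# separate leads carrying negligible current, so only the bond is seen.
r_4wire = (I_TEST * R_BOND) / I_TEST

print(f"2-wire reading: {r_2wire * 1000:.0f} mOhm (includes lead error)")
print(f"4-wire reading: {r_4wire * 1000:.0f} mOhm (true bond value)")
```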

We need to pass current through the protective earth circuit at a relatively low voltage and current to test its integrity (earth continuity and cross-sectional area) without damaging the protective or functional earth during routine testing. 1A is suggested in DB9801 (MHRA guidelines for acceptance testing of medical devices) so that there is the least likelihood of damaging devices at acceptance/commissioning. For routine testing in Austria and Germany I believe 200mA may be the standard earth-bond test current for medical devices (DIN VDE 0751).

When currents approaching 25A pass through the protective earth circuit of Class 1 equipment and the resulting voltage is limited to 6Vac, internal electronics shouldn't be affected. However, some devices with functional earths may have sensitive components in series, or voltage-sensitive components (diodes, resistors, "zero-Ohm" links or fusible links), connected between the functional earth(s) and "parallel" protective earths, so it may be possible to effectively short together and fuse any parallel functional and protective earths internally as a high test current divides between these circuit paths.
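As a rough illustration of how a 25A test current might divide between parallel earth paths (both resistances are assumptions for the sake of the example):

```python
# Illustrative current divider for a functional earth path running in
# parallel with the protective earth (both resistances are assumptions).

I_TEST = 25.0   # earth-bond test current, A
R_PE = 0.10     # protective earth path, ohm
R_FE = 0.50     # functional earth path via a "zero-Ohm" link, ohm

# Current divides in inverse proportion to path resistance.
i_fe = I_TEST * R_PE / (R_PE + R_FE)
print(f"Current through the functional earth path: {i_fe:.1f} A")  # ~4.2 A
# Several amps through a small fusible link or signal-grade track may
# be enough to open it, silently defeating the functional earth.
```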

#5324 11/08/06 7:48 AM
Ken, Master, Joined: Mar 2001, Posts: 208
The IPEM are about to issue new guidelines on electrical safety testing, and I believe they will be recommending 1 amp for testing the earth wire.

#5325 11/08/06 9:07 AM
Anonymous, Unregistered
The copy I have of the draft standard IEC 62353 ("Medical electrical equipment - recurrent test and test after repair of medical electrical equipment") makes interesting reading. It's adapted from the Austrian E 8751-1 standard; the basic philosophy is to apply safety tests that do not damage the IUT, to ensure the safety of the person doing the testing, and to minimise the number of tests that determine safety to those that are most important, reproducible and comparable. Interestingly it doesn't prescribe an earth-bond test current (yet).

In my opinion, if earth-bond tests are to be reproducible and comparable between tests, then 4-wire resistance measurement, using a calibrated device that doesn't require nulling by the operator, is the way to go.

#5326 11/08/06 1:01 PM
Technologist, Joined: Oct 2005, Posts: 49
Hi Colin

In reply to you: no, I haven't come across testing at 100mA.

The only testing we have done at those kinds of levels is when testing RCDs.

We currently have two 601 Pros, and they don't go down that low - that is, on the 601-1 setup.

Billy

#5327 11/08/06 1:21 PM
Expert, Joined: May 2002, Posts: 137
Since I have been in this business I have been using a 601 PRO, which routinely tests at 25A. Due to comments on this website that 25A is intended as a type test, I have recently been trying to test at 1A. A reasonable number of equipments fail at 1A but pass at 25A, which I cannot completely understand. Some of the problem may be that contact surfaces need 'wetting' by the higher current, but in fixed wiring systems, without plugs/sockets in circuit, I find this difficult to accept.
Has anyone had similar experiences, or an alternative explanation?

#5328 12/08/06 9:55 AM
Anonymous, Unregistered
Rick,

I do agree with you, and I have heard of wetting currents in the context of the minimum current required through switch contacts to keep them free of oxidation, corrosion and debris. As you seem to imply, smaller wetting currents may not clear debris and oxidation at connections in the earth circuit as effectively as larger currents do, and so may not maintain a low earth-bond resistance.

However, with regard to your comments comparing resistance measurements, it appears to me that you may be assuming the accuracy of the earth-bond measurement at 1A is the same as at 25A. In my view this is unlikely, for reasons I mentioned briefly in my earlier post and will expand on.

Have you considered that at lower test currents the measurement of resistance may be more susceptible to errors due to fluctuations in the constant test current and changes in temperature? Another consideration is that a lower test current effectively gives a poorer signal-to-noise ratio for a given resistance measurement (even assuming a Kelvin, i.e. 4-wire, measurement system is used). At lower test currents the measurement system must therefore be capable of resolving a potentially much smaller voltage range for very low resistances.
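To illustrate with the test currents mentioned in this thread (the 0.1 Ohm bond is just an example value):

```python
# Sense voltage available at the test currents mentioned in this
# thread, for an example 0.1 ohm bond (V = I x R).

R_BOND = 0.100   # example earth-bond resistance, ohm

for i_test in (0.2, 1.0, 10.0, 25.0):
    v_sense = i_test * R_BOND * 1000   # mV across the bond
    print(f"{i_test:5.1f} A -> {v_sense:6.1f} mV")
# 0.2 A -> 20 mV;  1 A -> 100 mV;  25 A -> 2500 mV.
# Resolving a 10 mOhm change means resolving 2 mV at 0.2 A but
# 250 mV at 25 A: a 125:1 difference in the demands on the circuit.
```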

With the possibility of poorer SNR at low test currents, i.e. potentially lower sensitivity and hence greater uncertainty, variability in the test current and poor measurement resolution may play a significant part in producing errors that appear as failures or discrepancies when the same DUT is compared at higher test currents. (I'm inferring that the measurement accuracy is likely to differ between test currents, and that this detrimentally influences the readings obtained.)

Stability and precision of the test current, particularly over time and with temperature changes, become more important at lower test currents, as does the ability to resolve the resulting small changes in the voltage measurement derived from IxR, if the accuracy of the resistance measurement is to be maintained.

Depending on the ability of a tester's measuring circuit to resolve voltage changes derived from the current through the earth bond over a large dynamic range, e.g. 1A, 10A or 25A (a 25:1 ratio requires some scaling factors, I expect), it's possible that at lower test currents the ability of the ES tester both to measure resistance accurately and to resolve small changes may be questionable, relatively speaking, compared with higher test currents (this depends on how good the design of the tester is).

In practice the maximum earth-bond test voltage of commercial testers is likely to vary with test current, to maintain a scaling factor that lets a common measurement circuit retain its sensitivity (though not necessarily the same accuracy). However, I doubt a 25:1 scaling ratio in the voltage across the DUT earth connection is possible if constant earth-bond test currents are switched from 25A (the type-testing value) at 6V open-circuit down to 1A (the DB9801 recommendation) at 24V open-circuit, for example. I think it's likely that at lower currents the open-circuit test voltage will tend to increase, but it can't be excessive, for safety reasons and to prevent damage to electrical components.
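As rough arithmetic on the compliance-voltage constraint (using the voltage/current pairings quoted above):

```python
# The open-circuit (compliance) voltage caps the largest resistance a
# constant-current source can drive: R_max = V_oc / I. Using the
# voltage/current pairings quoted above.

for i_test, v_oc in ((25.0, 6.0), (1.0, 24.0)):
    r_max = v_oc / i_test
    print(f"{i_test:4.1f} A at {v_oc:4.1f} V o/c -> max R = {r_max:5.2f} ohm")
# 25 A at 6 V can only hold regulation up to 0.24 ohm;
# 1 A at 24 V holds it up to 24 ohm.
```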

Since it's unlikely that voltage scaling can be maintained without allowing excessive earth-bond test voltages, this probably means that, for different constant earth-bond test currents, commercially available testers will have different resistance measurement capabilities built in, i.e. different resistance ranges, measurement resolutions and accuracies. Are these stated explicitly in manufacturers' specifications? If errors are not taken into account, is it possible that significant measurement errors can lead to indicated readings that are outside acceptable limits?

In my opinion, EST manufacturers, and we as engineers, should be looking at methods of maximising the accuracy of the resistance measurement in the earth circuit without producing the undesirable heating effects that could damage the protective circuit.

Going back to Rick's posting, I wonder whether meaningful comparisons can be made between resistances measured on the same DUT using different test currents (or even between ES testers using the same earth-bond test current) without the accuracy of the measurements having been established for each test current.

In practice, in my opinion, variation in the performance of testers used to measure earth-bond resistance at different test currents could lead to the use of test currents that do not give the best achievable accuracy. It could also produce discrepancies between resistance measurements at different earth-bond test currents on the same DUT, or even between different models of ES tester operating at the same test current with apparently identical accuracy specifications (if display and/or measurement resolution is not taken into account).

To record meaningful results and maintain repeatable, comparable earth-bond resistance measurements, perhaps it's important to examine the specifications of commercially available testers in more detail (especially those where full accuracy specifications are not published), looking at the specified accuracy of resistance measurement at different test currents, the accuracy of different models, the measurement methods used, and the effects of wetting current.

Maybe then we can decide on a single model of ES tester that gives consistently accurate measurements whatever the earth-bond test current, then choose the most appropriate test current for the job and stick to it.

When I look at some of the specifications for resistance measurement on ES testers, they look quite meaningless: at the resistance values we're interested in, the error of reading may be very significant due to a lack of adequate resolution in the display, despite a specified measurement accuracy that's much lower.

This matters if you're looking to compare subsequent earth-bond readings. It gives me the impression that testers need to be standardised on one model, and that the earth-bond test current needs to be selected and fixed at a reasonable value that's adequate for the majority of situations.
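A quick example of what I mean (the resolution and accuracy figures are assumed for illustration, not taken from any particular model):

```python
# How display resolution alone can swamp the quoted accuracy.
# Example figures assumed: 0.01 ohm display steps, +/-1% of reading.

reading = 0.05           # true bond resistance, ohm
resolution = 0.01        # display step, ohm
accuracy_pct = 0.01      # quoted +/-1% of reading

err_accuracy = reading * accuracy_pct   # +/-0.5 mOhm
err_resolution = resolution / 2         # +/-5 mOhm quantisation

print(f"accuracy term:   +/-{err_accuracy * 1000:.1f} mOhm "
      f"({accuracy_pct * 100:.0f}% of reading)")
print(f"resolution term: +/-{err_resolution * 1000:.1f} mOhm "
      f"({err_resolution / reading * 100:.0f}% of reading)")
# The display step contributes +/-10% of reading here: ten times the
# quoted accuracy figure at this resistance.
```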

What's the point of discussing earth-bond test currents if the numbers that come out of the measurement are not accurate and repeatable? The lower the test current, the greater the accuracy and repeatability need to be, so that tiny fluctuations in the earth resistance can be observed while flexing the mains cable during testing. If we're not testing cross-sectional area by passing a significant current, then we need to measure resistance (and, more importantly, its changes over subsequent tests) accurately whilst physically manipulating the cable and looking for problems.

Given the information in the published specifications for ES testers (or the lack of it in some cases), I'd be surprised if they all gave the same resistance measurement accuracy at different test currents, even with the readings within the accuracy implied. My concern is that for ES testers with significant inaccuracies this could mean significant discrepancies between test currents on some devices. The Biotek 601plus specification quotes enough information to arrive at figures for resistance measurement error at different currents without making assumptions, as does the Gossen Metrawatt SIII (with 4-wire measurement).

Unfortunately the potential for very significant errors in the QA-90 resistance measurement is apparent, if I've interpreted the specifications correctly and assume the worst-case figures: a quoted +/-2% of full scale (2 Ohm) error at 1A or 25A, with no mention of performance at other test currents (do I assume it's the same?). The Rigel 266 and 277 specifications I've looked at on their website don't even mention resistance measurement tolerances.
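To show why a full-scale spec is so punishing at low resistance (my arithmetic on the QA-90 figures quoted above):

```python
# What a +/-2% of full scale spec means at the bond resistances we
# actually measure (using the 2 ohm full scale quoted above).

FULL_SCALE = 2.0                    # ohm
abs_error = FULL_SCALE * 0.02       # +/-40 mOhm, fixed at any reading

for reading in (0.05, 0.10, 0.20):
    print(f"reading {reading:.2f} ohm -> +/-{abs_error * 1000:.0f} mOhm "
          f"= +/-{abs_error / reading * 100:.0f}% of reading")
# At a 0.1 ohm bond the permitted error is +/-40 mOhm, i.e. +/-40% of
# reading: comparable in size to the pass/fail limit itself.
```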

Not many of the safety analyser specifications for resistance measurement I've looked at are particularly "transparent", and a few are obviously incomplete. Worst case, their accuracy is not as cut and dried as it appears in the specifications: a couple of popular models have worst-case errors of 25% or more once the manufacturer-quoted error due to display resolution is taken into account, including significant variation between test currents.

Going back to Rick's comments: could error in resistance measurements taken using a Biotek 601plus at 25A result in discrepancies with measurements at 1A? Possibly, even though the readings obtained may not be outside the quoted specifications at these two test currents (the quoted accuracy for the Biotek 601plus is the same at 1A and 25A, but I note it is significantly different at 10A).

For example, if the earth-bond resistance is approaching the acceptable limit, then perhaps the contribution of measurement errors at different test currents is enough to give a discrepancy that produces a "fail" (assuming it's not compensated for in the tester, and despite the measurement being within the manufacturer's quoted accuracy). I know the Secutest III takes instrument calibration into account automatically; does the Biotek 601plus?

#5329 12/08/06 3:40 PM
Mentor, Joined: Sep 2000, Posts: 160
Hi all

We noticed the 1A vs 25A problem on a 601 Pro a few years ago: many things that had passed at 25A suddenly failed at 1A. It seemed that the measured resistance at 1A was higher than at 25A. The same thing was found on our newly acquired Rigel. So we wired some (big) resistors to plugs and measured them. Somewhat surprisingly, they seemed to show the same resistance at any current - and they were allowed to cool between tests, of course, before someone asks.

I didn't understand it then, I don't understand it now, and I didn't want to change to 1A anyway, so we went back to 25A testing. Do things blow up? Do they heck.

You can have all the theory you like, but the earth cable is there to blow the fuse in the event of a live-to-earth fault, and I do like to know that it will. 25A for 12 seconds tests a wire; lower currents or shorter times do not. OK, you could damage a cable - but just try to damage one and see how long it takes (not allowing it to cool between tests is cheating).
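For what it's worth, here's the back-of-envelope heating calculation (my assumptions: a sound 46 mOhm conductor, i.e. 2m of 0.75mm^2 copper, and pessimistically no cooling at all):

```python
# Back-of-envelope heating for the 12 s test. Assumptions (mine):
# a 46 mOhm bond (2 m of 0.75 mm^2 copper) and no heat loss at all.

R_BOND = 0.046   # ohm
T_TEST = 12.0    # s

for i_test in (1.0, 10.0, 25.0):
    energy = i_test ** 2 * R_BOND * T_TEST   # I^2.R.t, joules
    print(f"{i_test:4.1f} A for {T_TEST:.0f} s -> {energy:6.1f} J")
# 1 A: ~0.6 J   10 A: ~55 J   25 A: ~345 J.
# 345 J into ~13 g of copper (0.385 J/g.K) is roughly a 70 degC rise
# even with zero cooling: warm, but survivable for a sound cable.
```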

#5330 12/08/06 8:56 PM
Anonymous, Unregistered
Quote:
Somewhat surprisingly, they seemed to show the same resistance at any current
That's not surprising to me, since what I've been saying is that I don't believe the instruments we're using can necessarily measure low Ohms as well as we should expect them to. More importantly, perhaps, there appears to be a flaw in your argument.

Theory dictates that as a resistor heats up, being predominantly a metallic element or an alloy, its resistance will tend to increase (or even decrease, for some alloys), not remain the same. It will also expand, of course; however, the effect of expansion on its dimensions can be ignored, since it's the resistivity that's mainly influenced by temperature changes, particularly in copper, i.e. a cable.

Comparing resistance changes in a resistor made of one metal, designed to operate over a range of temperatures, possibly with heatsinking or dissipative cladding and a much lower temperature coefficient of resistance (as the metals and alloys used in power resistor construction have), with a length of copper strands wrapped in a PVC sheath may not be a particularly useful comparison, Graham.

Anyhow, without getting into too much theory, I'd expect a resistor (and a cable in particular) to warm up rather more, and hence change its resistance more, with 25A flowing through it from room temperature than with 1A flowing through it for the same time under the same conditions. So, being a simple lad at heart, I'd expect the initially identical temperatures, and thus resistances, to be different after time t, not the same.

Using a simple formula just to illustrate what I'm getting at: R = R0[1 + alpha(T - Tref)], where R0 is the initial resistance, alpha the temperature coefficient of resistance (0.004041/K for copper at 20C), Tref = 20C and T = 75C. Ignoring expansion, if a 0.1 Ohm copper cable heats up from 20C to 75C along its length due to a significant current passing through it (and thus power dissipated in it), then its resistance should increase by about 22% by my simplistic reckoning.

On the other hand, power wire-wound resistors with a low temperature coefficient, of less than +/-20x10^-6/K, use a resistive element made of Constantan (nickel and copper) or Nichrome (nickel and chromium). Again to illustrate: Constantan, used for lower-value power resistors, has an alpha of -0.000074/K. Repeating the calculation above for an equivalent Constantan cable or wire of 0.1 Ohm at 20C heated to 75C, i.e. the same conditions, the change in resistance is in the region of -0.4%.
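The same two calculations in a few lines of Python, for anyone who wants to check my reckoning:

```python
# The two calculations above, using R = R0 * (1 + alpha * (T - Tref)).

R0, T_REF, T_HOT = 0.1, 20.0, 75.0   # ohm, degC, degC

for name, alpha in (("copper", 0.004041), ("constantan", -0.000074)):
    r_hot = R0 * (1 + alpha * (T_HOT - T_REF))
    print(f"{name:>10}: {R0:.4f} -> {r_hot:.4f} ohm "
          f"({(r_hot - R0) / R0 * 100:+.1f}%)")
# copper:     0.1000 -> 0.1222 ohm (+22.2%)
# constantan: 0.1000 -> 0.0996 ohm (-0.4%)
```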

Should we expect to see a significant change in resistance due to temperature in a wire-wound resistor, Graham? I think not - no surprise there, then. What I'm saying is that the change in resistance of copper with temperature (and over the same time with different values of constant current dissipated in it) is much greater than that of Constantan (or of most other alloys used in resistor manufacture, in fact).

Apply different currents for the same time through a copper conductor at the same initial temperature, under the same conditions, and I'd expect the higher current to produce heating in the cable and, more importantly, an observable change in resistance (if the test instrument is up to it). Especially in copper cables - that's just physics.

Since the effect should be significant in practical terms, I'd expect a measuring instrument to pick up on it, provided it's not influenced by changing test conditions or poor design. Personally I think it's difficult to comment on resistance measurement performance at any current until we can make accurate and repeatable resistance measurements in the milliohm range over the range of currents we're interested in using.

What I've stated is that we've got to ensure that the instruments (and methods, of course) we're making comparisons with are up to making those comparisons, before we can progress to deciding what test current to use - since current obviously has effects in conductors that are well documented in theory, never mind "wetting currents".

Let's make accurate resistance measurements first, using established methods - we need repeatable and accurate, i.e. meaningful, measurements that are comparable if we're to investigate wetting currents and suchlike. Why would Rick's test fail at 1A and then pass at 25A if the tester wasn't detecting a change in apparent resistance? What's the difference between a low-ohms resistor and a multi-strand wire? Quite a bit, in my opinion, as I've tried to illustrate.

Personally speaking, I do think the earth-bond test current may have a significant "wetting effect", but at the end of the day, if you're measuring resistance by passing a constant current through it, you should obtain a voltage proportional to that resistance. My attitude is that I'd be willing to let someone who knows what they're doing loose with a 25A test current.

However, I have seen failures of medical systems, i.e. interconnected Class 1 devices, that I've attributed to poorly designed earth-bonding methods damaged by "excessive" current - not to my test methods (naturally).

What's really at issue with these so-called recommendations on test current is whether we can rely on individuals to perform the tests while fully aware of the potential issues of pushing high currents through devices - we should not assume that all operators are highly skilled, or that everyone notices when they damage an earth conductor or an earth-referenced functional path.

I'd prefer 10A, considering that "wetting current" may still be an issue, since I think 25A is too high for the devices I typically and routinely test. Most devices are connected with 10A cable, so maybe 10-15A is more than adequate for our purposes, i.e. routine testing.

Incidentally, were you using "big resistors" with values down in the region of 0.1 Ohm, Graham? In my view the capabilities indicated by the specifications of the testers I've briefly examined may vary significantly across the range of resistances in question - so we need to compare like performance over the range of resistances we're interested in measuring, as well as comparable methods.

#5331 14/08/06 10:03 AM
Mentor, Joined: Sep 2000, Posts: 160
You are saying either the instruments are wrong or the effect is real.

Rick said things failed at 1A, i.e. the measured resistance goes up as the current (and temperature) comes down. Copper generally doesn't do that.

My tests on resistors showed that the systems were consistent at 1A and 25A - come to that, my Fluke agreed, and that doesn't provide much in the way of wetting current.
