Dissolution failures happen, and when they do, it is important to investigate them properly.  Non-existent or inadequate failure investigations are the most common cause of a 483 observation related to dissolution.  When a failure occurs, you must conduct an investigation to determine what happened.  You cannot simply run the dissolution again and hope for a passing result, and you cannot invalidate the data from a failed run unless an investigation has found a determinate error.


Dissolution is a multi-step experiment, and many of the factors involved can lead to a failure.  One obvious cause is a problem with the dosage form itself, in which case you may need to pull or recall the lot.  There are other potential causes as well: analytical errors, an issue with the dissolution unit itself, or issues with the materials used in the dissolution test (standards, buffers, etc.).


In an investigation, you may find a determinate error or a non-determinate error.  A determinate error is what you hope to find: a clear error that can be proven to have happened and that could cause the problem being seen.  This could be a transcription error in the calculations, a standard preparation error, a paddle set at the wrong height, etc.  A non-determinate error is one that is suspected to have caused the failure but cannot be confirmed.  Non-determinate errors could include analyst sampling/filtering errors, a suspicion that the instrument was not properly cleaned before use, or observations made by the analyst that were not documented.  Because non-determinate errors cannot be proven, they generally require more investigation and the testing of more samples than determinate errors do.


When investigating a dissolution failure, you should adopt a systematic approach so that every potential source of error is examined.  I recommend a reverse chronological approach, meaning the last thing done should be the first thing investigated.  This approach tends to shorten investigations and increases the likelihood of salvaging the run.  For example, you may start an investigation and find a calculation error; in that case, you correct the error and the investigation ends there.


When you have a dissolution failure, it is helpful to determine what that failure looks like.  This can help focus your investigation on the most likely causes.  I classify failures into four patterns: the entire data set is higher or lower than normal; one dosage form is an outlier; there is a high level of variability overall; or there is a single data point that makes no sense.
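As a rough illustration of this triage, the sketch below sorts a six-vessel result set into the failure patterns described above. All thresholds (the expected value, tolerance, CV limit, and spike cutoff) are illustrative assumptions for demonstration only, not compendial acceptance criteria.

```python
# Illustrative triage of a six-vessel dissolution data set.
# Thresholds are assumptions, not USP/regulatory criteria.
import statistics

def classify(results, expected=80.0, tol=5.0, cv_limit=5.0, spike=15.0):
    """Return a hint at which failure pattern a data set resembles."""
    med = statistics.median(results)
    # One point far from the median, with the rest agreeing: a spike/outlier.
    outliers = [r for r in results if abs(r - med) > spike]
    if len(outliers) == 1:
        return "single outlier/spike -- check that vessel and its sample"
    mean = statistics.mean(results)
    cv = statistics.stdev(results) / mean * 100
    if cv > cv_limit:
        return "high variability -- check technique, degassing, alignment"
    if abs(mean - expected) > tol:
        return "whole set shifted -- check calculations and standard prep"
    return "no obvious pattern"

# A set that is uniformly low points toward a calculation or standard issue.
print(classify([62.1, 61.8, 62.5, 61.9, 62.3, 62.0], expected=80.0))
```

In practice this kind of pattern recognition is done by eye; the point is simply that each pattern maps to a different starting place for the investigation.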


If the entire data set is either too high or too low, there is a good chance that a single error has shifted every result, and it may have nothing to do with the actual dissolution.  Calculation and transcription errors commonly produce this pattern, such as using the wrong standard concentration or dissolution volume.  A standard preparation error would also shift all results, and can be checked by preparing a fresh standard for comparison against the one originally used.  An issue on the dissolution unit would need to be a consistent one, such as all the evaporation covers being off or the wrong height tool being used for every position.  A change in the dosage form or a bad batch can also produce this pattern, such as gelatin cross-linking on stability.
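To see why a calculation-level error shifts the whole set, consider a generic single-point, external-standard dissolution calculation. This is a sketch with made-up numbers, assuming simple UV/HPLC quantitation against one standard; it is not any particular method's calculation.

```python
# Illustrative single-point dissolution calculation (external standard).
# All numbers are made up for demonstration.

def percent_dissolved(abs_sample, abs_standard, std_conc_mg_ml,
                      vessel_volume_ml, label_claim_mg):
    """Percent of label claim released at the sampling time point."""
    conc_sample = (abs_sample / abs_standard) * std_conc_mg_ml  # mg/mL
    return conc_sample * vessel_volume_ml / label_claim_mg * 100

absorbances = [0.452, 0.448, 0.455, 0.450, 0.447, 0.453]  # six vessels

# Correct standard concentration vs. a transcription error (0.112 vs 0.121)
correct = [percent_dissolved(a, 0.450, 0.112, 900, 100) for a in absorbances]
wrong   = [percent_dissolved(a, 0.450, 0.121, 900, 100) for a in absorbances]

# Every vessel shifts by the same factor (0.121/0.112, about +8%), so the
# whole data set moves together -- the signature of this kind of error.
for c, w in zip(correct, wrong):
    print(f"correct {c:6.1f}%   wrong {w:6.1f}%   ratio {w/c:.3f}")
```

Because the standard concentration multiplies every result equally, correcting it (or the transcribed value) restores the entire set at once, which is why these are the first things to check.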


If one dosage form is an outlier, then the focus should be on the sample itself and on anything that could have been different at that position during the test.  Could the filter have fallen off during the dissolution run?  Is that paddle/basket height the same as the others?  Was the correct volume poured into that vessel?  Is anything else off about the alignment at that position?  Were there any observations of that dosage form that differed from the rest (coning, cross-linking, floating, etc.)?


If there is a high CV overall, the cause I find most often is analyst technique.  Was proper USP sampling and filtration technique followed?  Was the media degassed properly?  You could also have a dissolution system misalignment, such as the head of the dissolution unit becoming slightly tilted; in that case you may find that your results trend with position on the unit, for example the left side reading higher than the right.  There may also be vibration affecting the system, especially from a source outside the unit, such as a failing heater/circulator positioned next to it.
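A quick way to spot the positional trend mentioned above is to compare the two banks of vessels. The sketch below assumes a 2x3 layout with positions 1-3 on the left and 4-6 on the right, which is an illustrative assumption (unit geometries vary), and the numbers are made up.

```python
# Illustrative check for high %CV and a left-vs-right positional trend.
# Assumes vessels 1-3 are the left bank and 4-6 the right (layout assumption).
import statistics

results = [78.2, 76.9, 77.5, 84.1, 85.0, 83.6]  # % dissolved, vessels 1-6

mean = statistics.mean(results)
cv = statistics.stdev(results) / mean * 100
print(f"mean {mean:.1f}%  CV {cv:.1f}%")

left, right = results[:3], results[3:]
diff = statistics.mean(right) - statistics.mean(left)
# A consistent offset between the two banks suggests a physical cause
# (tilted drive head, vibration) rather than random sampling error.
if abs(diff) > statistics.stdev(results):
    print(f"positional trend suspected: right bank differs by {diff:+.1f}%")
```

If the high CV shows no positional structure, analyst technique and media degassing become the more likely suspects.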


Finally, there is the data that doesn't make sense such as a very high spike in the data.  In many cases, this is caused by either the sample not being filtered correctly (or at all) or a bad sample reading.  The data spike for filtration would be due to an undissolved drug particle being sampled into the vial or test tube where it then dissolves in the small volume of media there.  If this occurs, analysis of that sample on HPLC would show a normal peak shape.  A bad sample reading could occur due to an air bubbles or other factors.
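The arithmetic behind the filtration spike is worth seeing once. In this sketch, a small undissolved particle carried into a 10 mL aliquot dissolves there and is read back as if it had come from the full vessel volume; all numbers are illustrative.

```python
# Illustrative arithmetic: an undissolved particle in the sample aliquot
# produces a wildly high apparent result. All numbers are made up.
label_mg, vessel_ml, aliquot_ml = 100, 900, 10

true_pct = 75.0                                       # actual % dissolved
vessel_conc = true_pct / 100 * label_mg / vessel_ml   # mg/mL in the vessel

particle_mg = 2.0          # a 2 mg drug particle drawn into the test tube
spiked_conc = vessel_conc + particle_mg / aliquot_ml  # dissolves in 10 mL

# The calculation assumes the aliquot represents the whole vessel, so the
# extra 0.2 mg/mL is multiplied back up by the 900 mL vessel volume.
apparent_pct = spiked_conc * vessel_ml / label_mg * 100
print(f"true {true_pct:.0f}%  apparent {apparent_pct:.0f}%")
```

A 2 mg particle (2% of the label claim) more than triples the apparent result here, which is why these spikes look nonsensical rather than merely high.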


Read also: Dissolution Method Development and Validation


Resource Person: Ken Boda (Dissolution Product Specialist at Agilent Technologies)