Psychology professor Jordan Peterson explains the documented science behind why relative poverty, not poverty itself, drives crime (as measured by the Gini coefficient). He goes on to explain the role of the male dominance hierarchy in the context of relative poverty and crime.
Dr. Peterson’s new book is available for pre-order:
12 Rules for Life: An Antidote to Chaos: http://amzn.to/2yvJf9L
If you want to support Dr. Peterson, here is his Patreon:
Check out Jordan Peterson’s Self Authoring Program, a powerful tool to sort yourself out:
http://bit.ly/selfAuth (Official affiliate link for Bite-sized Philosophy)
ISO 13485 Medical devices — Quality management systems — Requirements for regulatory purposes is an International Organization for Standardization (ISO) standard, first published in 1996, that specifies the requirements for a comprehensive quality management system for the design and manufacture of medical devices.
The essentials of validation planning, protocol writing, and change management will be explained.
A counting statistic is simply a numerical count of some item, such as “one million missing children”, “three million homeless”, or “3.5 million STEM jobs by 2025.” Counting statistics are frequently deployed in public policy debates, the marketing of goods and services, and other contexts. Particularly when paired with an emotionally engaging story, counting statistics can be powerful and persuasive. However, they can also be highly misleading or even completely false. This article discusses how to evaluate counting statistics and includes a detailed checklist of steps to follow.
Checklist for Counting Statistics
- Find the original primary source of the statistic. Ideally you should determine the organization or individual who produced the statistic. If the source is an organization you should find out who specifically produced the statistic within the organization. If possible find out the name and role of each member involved in the production of the statistic. Ideally you should have a full citation to the original source that could be used in a high quality scholarly peer-reviewed publication.
- What is the background, agenda, and possible biases of the individual or organization that produced the statistic? What are their sources of funding? What is their track record, both in general and in the specific field of the statistic? Many statistics are produced by “think tanks” with various ideological and financial biases and commitments.
- How is the item being counted defined? This is very important. Many questionable statistics use a broad, often vague definition of the item, paired with personal stories of an extreme or shocking nature, to persuade. For example, the widely quoted “one million missing children” figure used in the United States in the 1980s — and even today — was rounded up from an official FBI number of about seven hundred thousand missing children, the vast majority of whom returned home safely within a short time, and was paired with rare cases of horrific stranger abductions and murders such as the 1981 murder of six-year-old Adam Walsh.
- If the statistic is paired with specific examples or personal stories, how representative are these examples and stories of the aggregate data used in the statistic? As with the missing children statistics of the 1980s, it is common for broad definitions giving large numbers to be paired with rare, extreme examples.
- How was the statistic measured and/or computed? At one extreme, some statistics are wild guesses by interested parties. In the early stages of the recognition of a social problem, there may be no solid reliable measurements; activists are prone to providing an educated guess. The statistic may be the product of an opinion survey. Some statistics are based on detailed, high quality measurements.
- What is the appropriate scale to evaluate the counting statistic? For example, the United States Census estimates the total population of the United States as of July 1, 2018 at 328 million. The US Bureau of Labor Statistics estimates about 156 million people are employed full time in May 2019. Thus “3.5 million STEM jobs” represents slightly more than one percent of the United States population and slightly more than two percent of full time employees.
- Are there independent estimates of the same or a reasonably similar statistic? If yes, what are they? Are the independent estimates consistent? If not, why not? If there are no independent estimates, why not? Why is there only one source? For example, estimates of unemployment based on the Bureau of Labor Statistics Current Population Survey (the source of the headline unemployment number reported in the news) and the Bureau’s payroll survey have a history of inconsistency.
- Is the statistic consistent with other data and statistics that are expected to be related? If not, why doesn’t the expected relationship hold? For example, we expect low unemployment to be associated with rising wages. This is not always the case, raising questions about the reliability of the official unemployment rate from the Current Population Survey.
- Is the statistic consistent with your personal experience or that of your social circle? If not, why not? For example, I have seen high unemployment rates among my social circle at times when the official unemployment rate was quite low.
- Does the statistic feel right? Sometimes, even though the statistic survives detailed scrutiny — following the above steps — it still doesn’t seem right. There is considerable controversy over the reliability of intuition and “feelings.” Nonetheless, many people believe a strong intuition often proves more accurate than a contradictory “rational analysis.” Often if you meditate on an intuition or feeling, more concrete reasons for the intuition will surface.
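The scale check described in the list above is simple arithmetic. As a minimal sketch, using only the population and employment figures quoted in the article (US Census estimate for July 1, 2018, and the BLS full-time employment estimate for May 2019):

```python
# Putting "3.5 million STEM jobs" on an appropriate scale, using the
# figures quoted in the article.
stem_jobs = 3.5e6           # "3.5 million STEM jobs by 2025"
us_population = 328e6       # US Census estimate, July 1, 2018
full_time_employed = 156e6  # BLS full-time employment estimate, May 2019

pct_of_population = 100 * stem_jobs / us_population
pct_of_workforce = 100 * stem_jobs / full_time_employed

print(f"{pct_of_population:.2f}% of the US population")
print(f"{pct_of_workforce:.2f}% of full-time employees")
```

This reproduces the article's figures: slightly more than one percent of the population and slightly more than two percent of full-time employees.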
(C) 2019 by John F. McGowan, Ph.D.
John F. McGowan, Ph.D. solves problems using mathematics and mathematical software, including developing gesture recognition for touch devices, video compression, and speech recognition technologies. He has extensive experience developing software in C, C++, MATLAB, Python, Visual Basic, and many other programming languages. He has been a Visiting Scholar at HP Labs developing computer vision algorithms and software for mobile devices. He has worked as a contractor at NASA Ames Research Center, involved in the research and development of image and video processing algorithms and technology. He has published articles on the origin and evolution of life, the exploration of Mars (anticipating the discovery of methane on Mars), and cheap access to space. He has a Ph.D. in physics from the University of Illinois at Urbana-Champaign and a B.S. in physics from the California Institute of Technology (Caltech).
The outcome of poisoning depends on many factors. The factors that can affect the intensity of poisoning are explained below:
- Dose
- Time of intake
- Route of administration
- Environmental factors, etc.
Dose
The amount of the poison determines its effect on the body: the smaller the dose, the lighter the effect; the larger the dose, the more severe the effect.
With continuous use of some drugs, such as opiates, tobacco, and alcohol, a person develops a resistance to them.
Incompatible Combination of Drugs
Ingestion of certain incompatible combinations of medicines may be fatal, such as Prozac and tramadol, thyroid medication and proton pump inhibitors, or nonsteroidal anti-inflammatory drugs and antihypertensives.
Some persons show an abnormal response (idiosyncrasy) to a drug such as morphine, quinine, or aspirin due to inherent personal hypersensitivity.
Some persons are allergic (acquired hypersensitivity) to certain drugs such as penicillin, sulpha drugs, etc.
Ingestion of certain medications, such as anti-ulcer gels taken with aspirin, may lead to fatal effects.
People develop a marked tolerance to opium, alcohol, strychnine, tobacco, arsenic, and some other narcotic drugs through repeated and continued use.
Some drugs can be toxic when taken together and may cause a lethal effect, such as alcohol and benzodiazepines, heroin and cocaine, benzodiazepines and opioids, or alcohol and opioids.
Continuous ingestion of small amounts of a poison such as arsenic, strychnine, or lead allows it to accumulate in the body and may cause death.
Conditions of the Body
- The condition of the body, i.e. age, health, etc., also affects the action of a poison.
- Generally, old persons, weak persons, and children are more severely affected by a low dose of poison than young, healthy persons.
Repeated small doses of cumulative poisons such as arsenic, lead, mercury, strychnine, and digitalis may cause death or chronic poisoning through cumulative action.
Sometimes a large dose of a poison acts differently from a small dose; for example, a large dose of arsenic may cause death by shock, while a small dose results in diarrhoea.
Forms of Poison
- Gases/Vapours Poisons
- Liquid Poisons
- Powder Poisons
- Chemical Combination
- Mechanical Combination
Gases / Vapours Poisons
These types of poisons are absorbed immediately and act quickly.
Liquid Poisons
These act faster than solids.
Powder Poisons
A finely powdered poison acts faster than a coarsely powdered one.
Chemical Combination
Some substances act lethally in combination, such as acids and alkalis, or strychnine and tannic acid.
Mechanical Combination
The action of a poison is altered when it is combined mechanically with an inert substance; for example, an alkaloid taken with charcoal does not act.
Methods Of Administration
A poison acts most rapidly when inhaled in gaseous form or injected intravenously, less rapidly when injected intramuscularly or subcutaneously, and most slowly when swallowed or applied to the skin.
Watch it, share it, and subscribe.
By @forensicfield
Immediately following the release of the autopsy reports on November 19th, I contacted Thomas Mollett, a forensic investigator, fellow true crime author, and friend, and asked him his opinion on Shan’ann’s blood alcohol levels. They were found to be three times the legal limit for driving. How likely was it, I asked, that these apparently high levels were from “normal” decomposition?
Pathology is an extremely complex science, and many factors play into the biological processes that occur after death.
The three basic pillars one uses to calculate whether the BAC is “normal” or not are related to:
- the time the body is exposed to the elements [here time of death is a factor, unknown in this case, but with a relatively short window either way]
- the ambient conditions of the body [temperature, humidity etc.]
- circumstantial evidence is also a vital tool to gauge alcohol content, including eye witnesses, Shan’ann’s drinking habits, and her appearance in the Ring camera footage when she arrived home [described but not released thus far]
During our first communication I miscommunicated to Mollett that Shan’ann’s corpse was recovered after only 48 hours, which I guessed wasn’t enough time to reflect the high alcohol levels found. This was an initial error on my part; it took closer to 70 hours for Shan’ann’s corpse to be discovered and exhumed.
Based on this initial miscommunication, Mollett also believed the BAC level was likely higher than a natural rate [which as I say, was also what I suspected].
I asked Mollett to investigate the BAC levels and I’m grateful to him for doing so in detail. Obviously part of his thorough investigation corrected the original 48 hour error.
Below is Mollett’s unabridged report on the BAC levels.
Angus Marshall, Digital Forensic Scientist
Where to begin? I have a lot of different roles these days, but by day I’m a Lecturer in Cybersecurity – currently at the University of York, and also run my own digital forensic consultancy business. I drifted into the forensic world almost by accident back in 2001 when a server I managed was hacked. I presented a paper on the investigation of that incident at a forensic science conference and a few weeks later found myself asked to help investigate a missing person case that turned out to be a murder. There’s been a steady stream of casework ever since.
I’m registered as an expert adviser and most of my recent casework seems to deal with difficult to explain or analyse material. Alongside that, I’ve spent a lot of time (some might say too much) working on standards during my time on the Forensic Science Regulator’s working group on digital evidence and as a member of BSI’s IST/033 information security group and the UK’s digital evidence rep. on ISO/IEC JTC1 SC27 WG4, where I led the work to develop ISO/IEC 27041 and 27042, and contributed to the other investigative and eDiscovery standards.
You’ve recently published some research into verification and validation in digital forensics. What was the goal of the study?
It grew out of a proposition in ISO/IEC 27041 that tool verification (i.e. evidence that a tool conforms to its specification) can be used to support method validation (i.e. showing that a particular method can be made to work in a lab). The idea of the 27041 proposal is that if tool vendors can provide evidence from their own development processes and testing, the tool users shouldn’t need to repeat that. We wanted to explore the reality of that by looking at accredited lab processes and real tools. In practice, we found that it currently won’t work because the requirement definitions for the methods don’t seem to exist and the tool vendors either can’t or won’t disclose data about their internal quality assurance.
The effect of it is that it looks like there may be a gap in the accreditation process. Rather than having a selection of methods that are known to work correctly (as we see in calibration houses, metallurgical and chemical labs etc. – where the ISO 17025 standard originated) which can be chosen to meet a specific customer requirement, we have methods which satisfy much fuzzier customer requirements which are almost always non-technical in nature because the customers are CJS practitioners who simply don’t express things in a technical way.
We’re not saying that anyone is necessarily doing anything wrong, by the way, just that we think they’ll struggle to provide evidence that they’re doing the right things in the right way.
Where do we stand with standardisation in the UK at the moment?
Standardization is a tricky word. It can mean that we all do things the same way, but I think you’re asking about progress towards compliance with the regulations. In that respect, it looks like we’re on the way. It’s slower than the regulator would like. However, our research at York suggests that even the accreditations awarded so far may not be quite as good as they could be. They probably satisfy the letter of the regulator’s documents, but not the spirit of the underlying standard. The technical correctness evidence is missing.
ISO 17025 has faced a lot of controversy since it has been rolled out as the standard for digital forensics in the UK. Could you briefly outline the main reasons why?
Most of the controversy is around cost and complexity. With accreditation costing upwards of £10k for even a small lab, it makes big holes in budgets. For the private sector, where turnover for a small lab can be under £100k per annum, that’s a huge issue. The cost has to be passed on. Then there’s the time and disruption involved in producing the necessary documents, and then maintaining them and providing evidence that they’re being followed for each and every examination.
A lot of that criticism is justified, but adoption of any standard also creates an opportunity to take a step back and review what’s going on in the lab. It’s a chance to find a better way to do things and improve confidence in what you’re doing.
In your opinion, what is the biggest stumbling block either for ISO 17025 specifically, or for standardizing digital forensics in general?
Two things – as our research suggests, the lack of requirements makes the whole verification and validation process harder, and there’s the confusion about exactly what validation means. In ISO terms, it’s proof that you can make a process work for you and your customers. People still seem to think it’s about proving that tools are correct. Even a broken tool can be used in a valid process, if the process accounts for the errors the tool makes.
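The point that even a broken tool can sit inside a valid process can be illustrated with a toy sketch. Everything here is hypothetical, invented for illustration; the tool, its one-hour offset, and the function names do not describe any real forensic product:

```python
# Hypothetical sketch: a tool with a known, characterised error can
# still be part of a valid method if the method compensates for it.

def buggy_timestamp_tool(raw_epoch: int) -> int:
    """Imaginary extraction tool which verification testing showed
    reports timestamps exactly 3600 seconds (one hour) early."""
    return raw_epoch - 3600

KNOWN_OFFSET_S = 3600  # error characterised during method validation

def validated_method(raw_epoch: int) -> int:
    """The lab's documented method wraps the tool and corrects the
    known error, so the process as a whole still produces a correct
    result despite the broken tool."""
    return buggy_timestamp_tool(raw_epoch) + KNOWN_OFFSET_S
```

The validation evidence here is not that the tool is correct (it isn't), but that the documented method, with its compensation step, reliably produces correct results.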
I guess I’ve had the benefit of seeing how standards are produced and learning how to use the ISO online browsing platform to find the definitions that apply. Standards writers are a lot like Humpty Dumpty: when we use a word, it means exactly what we choose it to mean.
Is there a way to properly standardise tools and methods in digital forensics?
It’s not just a UK problem – it’s global. There’s an opportunity for the industry to review the situation, now, and create its own set of standard requirements for methods. If these are used correctly, we can tell the tool makers what we need from them and enable proper objective testing to show that the tools are doing what we need them to. They’ll also allow us to devise proper tests for methods to show that they really are valid, and to learn where the boundaries of those methods are.
Your study also looked at some existing projects in the area: can you tell us about some of these? Do any of them present a potential solution?
NIST and SWGDE both have projects in this space, but specifically looking at tool testing. The guidance and methods look sound, but they have some limitations. Firstly, because they’re only testing tools, they don’t address some of the wider non-technical requirements that we need to satisfy in methods (things like legal considerations, specific local operational constraints etc.).
Secondly, the NIST project in particular lacks a bit of transparency about how they’re establishing requirements and choosing which functions to test. If the industry worked together we could provide some more guidance to help them deal with the most common or highest priority functions.
Both projects, however, could serve as a good foundation for further work and I’d love to see them participating in a community project around requirements definition, test development and sharing of validation information.
Is there anything else you’d like to share about the results?
We need to get away from thinking solely in terms of customer requirements and method scope. These concepts work in other disciplines because there’s a solid base of fundamental science behind the methods. Digital forensics relies on reverse-engineering and trying to understand the mind of a developer in order to work out how to extract and interpret data. That means we have a potentially higher burden of proof for any method we develop. We also need to remember that we deal with a rate of change caused by human ingenuity and marketing, instead of evolution.
Things move pretty fast in DF; if we don’t stop and look at what we’re doing once in a while, we’ll miss something important.
Read Angus Marshall’s paper on requirements in digital forensics method definition here.