
Please be cautious in using the transcripts.

They were created mechanically and have mostly not been checked or revised.

Here is how they were created:

  1. the live lecture was recorded;
  2. the recording was machine-transcribed;
  3. an LLM was asked to clean up the transcript and link it to individual slides.

This is an error-prone process.


Evidence for Dual Process Theories


1. Ethical judgements are explained by a dual-process theory, which distinguishes faster from slower processes.

2. Faster processes are unreliable in unfamiliar* situations.

3. Therefore, we should not rely on faster processes in unfamiliar* situations.

4. When philosophers rely on not-justified-inferentially premises, they are relying on faster processes.

5. The moral scenarios and principles philosophers consider involve unfamiliar* situations.

6. Therefore, not-justified-inferentially premises about particular moral scenarios, and debatable principles, cannot be used in ethical arguments where the aim is knowledge.


Dual Process Theory of Ethical Abilities (core part)

Two (or more) ethical processes are distinct:
the conditions which influence whether they occur,
and which outputs they generate,
do not completely overlap.

One process makes fewer demands on scarce cognitive resources than the other.

(Terminology: fast vs slow)


How to evaluate a theory

1. Never trust a philosopher.

2. How good is the evidence?

a. Has the theory featured in a review? If so, does the review broadly support the theory’s main claims?

b. Is there a variety of studies, from different labs, using different methods, which support the theory’s various predictions?

c. Are there studies which falsify the theory’s predictions?

Previously we only talked about doing this for specific studies, not theories; but the idea is clear enough.
I am ignoring the neuroscience, which could eventually be important but which I do not find terribly persuasive as things stand.

Evidence Greene (2014) cites includes:

  • Suter & Hertwig (2011)
  • Trémolière & Bonnefon (2014)
  • Conway & Gawronski (2013)

aux. hypothesis: The slow process is responsible for characteristically consequentialist responses; the fast for other responses.

Prediction 1: Limiting the time available to make a decision will reduce consequentialist responses.


Suter & Hertwig, 2011 figure 1

caption: ‘Fig. 1. Average proportion of deontological responses separately for conditions and type of moral dilemma (high- versus low-conflict personal and impersonal dilemmas) with data combined across the fast (i.e., time-pressure and self-paced-intuition) and slow conditions (no-time-pressure and self-paced-deliberation) in Experiments 1 and 2, respectively. Error bars represent standard errors. Only responses to high-conflict dilemmas differed significantly between the conditions’
‘we selected 10 moral dilemmas (five personal and five impersonal ones) that about equally likely gave rise to both yes and no responses, respectively’ (Suter & Hertwig, 2011, p. 465).

Suter & Hertwig, 2011 figure 1

What does low conflict mean?
‘We distinguished personal dilemmas according to whether harm caused was depicted as a side-effect (low-conflict) or as intended as the means to an end (high-conflict)’ (Suter & Hertwig, 2011, p. 465).

Suter & Hertwig, 2011 figure 1

[Ignore the impersonal, but if you want to know ...]
‘a distinction between personal and impersonal moral dilemmas, whereby the crucial difference between the two lies in the emotional response that they trigger or fail to trigger and in the associated judgments. Personal dilemmas—for instance, picture an overcrowded lifeboat in which a crew member throws someone out to keep it afloat—tend to engage emotional processing to a greater extent than dilemmas necessitating less personal agency (e.g., the crew member only needs to hit a switch to remove the passenger).’ (Suter & Hertwig, 2011, p. 455).

‘participants in the time-pressure condition, relative to the no-time-pressure condition, were more likely to give ‘‘no’’ responses in high-conflict dilemmas’

(Suter & Hertwig, 2011, p. 456).

Good start.

But are there other studies?


Evidence Greene (2014) cites includes:

  • Suter & Hertwig (2011)
  • Trémolière & Bonnefon (2014)
  • Conway & Gawronski (2013)
another time pressure study
everything exactly as before
And you saw this study last week

aux. hypothesis: The slow process is responsible for characteristically consequentialist responses; the fast for other responses.

Prediction 1: Limiting the time available to make a decision will reduce consequentialist responses.

time pressure study

Trémolière and Bonnefon, 2014 figure 4

‘The model detected a significant effect of time pressure, p = .03 (see Table 1), suggesting that the slope of utilitarian responses was steeper for participants under time pressure. As is visually clear in Figure 4, participants under time pressure gave less utilitarian responses than control participants to scenarios featuring low kill–save ratios, but reached the same rates of utilitarian responses for the highest kill–save ratios.’ (Trémolière & Bonnefon, 2014, p. 927)
[*todo*: save for later, more drama: also mention Gawronski, Conway, Armstrong, Friesdorf, & Hütter (2018), p. 1006 ‘reinterpretation’ and p. 992 descriptive vs mechanistic]

Gawronski & Beer (2017, p. 669) argue for an alternative interpretation: the central findings of Trémolière & Bonnefon (2014) ‘show that outcomes did influence moral judgments, but only when participants were under cognitive load or time pressure (i.e., the white bars do not significantly differ from the gray bars within the low load and no time pressure conditions, but they do significantly differ within the high load and time pressure conditions). Thus, a more appropriate interpretation of these data is that cognitive load and time pressure increased utilitarian responding, which stands in stark contrast to the widespread assumption that utilitarian judgments are the result of effortful cognitive processes (Greene et al., 2008; Suter & Hertwig, 2011).’

Evidence Greene (2014) cites includes:

  • Suter & Hertwig (2011)
  • Trémolière & Bonnefon (2014)
  • Conway & Gawronski (2013)

aux. hypothesis: The slow process is responsible for characteristically consequentialist responses; the fast for other responses.

Prediction 3: Higher cognitive load will reduce the dominance of the more outcome-sensitive process.

[Might need to watch the recording if you want to know about this.]

Conway & Gawronski 2013, figure 1

Note that if we just provide ‘incongruent’ dilemmas, we cannot distinguish all the different possibilities.
This is why varying the outcomes (which we do between congruent and incongruent dilemmas) is so important.
‘incongruent dilemma’ : kill one to save five lives (consequentialism says yes, deontology says no)
‘congruent dilemma’ : kill one to prevent a paint bomb from going off (consequentialism and deontology both say no)

Conway & Gawronski 2013, figure 3


Evidence Greene (2014) cites includes:

  • Suter & Hertwig (2011)
  • Trémolière & Bonnefon (2014)
  • Conway & Gawronski (2013)

1. Ethical judgements are explained by a dual-process theory, which distinguishes faster from slower processes.

2. Faster processes are unreliable in unfamiliar* situations.

3. Therefore, we should not rely on faster processes in unfamiliar* situations.

4. When philosophers rely on not-justified-inferentially premises, they are relying on faster processes.

5. The moral scenarios and principles philosophers consider involve unfamiliar* situations.

6. Therefore, not-justified-inferentially premises about particular moral scenarios, and debatable principles, cannot be used in ethical arguments where the aim is knowledge.


How to evaluate a theory

1. Never trust a philosopher.

2. How good is the evidence?

a. Has the theory featured in a review? If so, does the review broadly support the theory’s main claims?

b. Is there a variety of studies, from different labs, using different methods, which support the theory’s various predictions?

c. Are there studies which falsify the theory’s predictions?

Check you are happy with the evidence for the theory.