On Sat, Nov 12, 2011 at 10:42 AM, Wojtek S <wsoko52 at gmail.com> wrote:
> I presume these were mail surveys. Mail surveys are cheap to
> administer (a pro), but there is no way to control who in an
> organization will answer them or how they interpret the questions. We
> do web-based organizational surveys targeting nonprofits and face
> these problems all the time. AFAIK, Gallup does as well.
>
> My own professional opinion on the subject is that validity depends
> on the subject of the survey. If you ask matter-of-fact questions,
> such as personnel or financial resources designated to particular
> projects, chances are that whoever receives the survey in an
> organization will forward it to the person responsible for those
> projects or to the accountants doing their books (who are often
> outside consulting firms), and the chances are you will get either
> reasonably accurate answers or no answers at all if answering your
> questions proves too cumbersome. These guys are usually busy and have
> no incentive to lie - they will either answer your questions if they
> can do so easily, or pass. So one way of testing for a potential bias
> in your responses is to examine non-response and see if there is any
> systematic difference between respondents and non-respondents.
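>
> A minimal sketch of that kind of non-response check (Python with
> pandas and scipy; the column names and toy data are placeholders of
> my own, not from any actual survey):
>
>     # Compare respondents vs. non-respondents on a characteristic
>     # known for the whole sample frame (here, hypothetical org size).
>     import pandas as pd
>     from scipy.stats import chi2_contingency
>
>     frame = pd.DataFrame({
>         "org_size": ["small", "small", "large", "large", "small",
>                      "large", "small", "large"],
>         "responded": [True, False, True, True, False, False,
>                       True, True],
>     })
>
>     # Cross-tabulate response status against the known characteristic.
>     table = pd.crosstab(frame["org_size"], frame["responded"])
>     chi2, p, dof, expected = chi2_contingency(table)
>     print(table)
>     print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
>     # A small p-value suggests response propensity varies with org
>     # size, i.e., a systematic respondent/non-respondent difference.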
>
> On the other hand, if you ask general opinion questions, chances are
> you will get generic PR-correct responses no matter who answers your
> survey - but you can reasonably expect that without running a survey.
> Asking for the opinions of corporate men or women is a very tricky
> business and requires very skilled interviewers. If you rely on mail
> surveys, you may as well spare yourself the trouble and read the
> corporate propaganda instead - same answers, but at a much lower cost.
>
> In any case, depending on the size of your sample, I would run a
> comparison of answers to questions that you suspect may be affected
> by the respondent rule (i.e., who answers the questionnaire -
> questions answered by consultants may be considered "proxy
> responses," so you may want to look for that term too) to see if
> there are any systematic differences. The respondent rule is a very
> tricky thing: sometimes it introduces a bias, sometimes it does not.
> I would look for literature discussing the effects of the respondent
> rule on the particular type of survey that is of interest to you - if
> you can find any. As I already said, I do not think that proxy
> answers introduce a systematic bias in organizational surveys as long
> as the answers are produced by sourcing written records, but this may
> vary depending on what questions you ask in your survey.
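>
> A sketch of what that comparison might look like (again Python; the
> "filled_by" flag and the ratings are hypothetical - you would need to
> record who actually completed each questionnaire):
>
>     # Test whether answers differ by who filled out the form
>     # (direct staff responses vs. consultant "proxy" responses).
>     import pandas as pd
>     from scipy.stats import mannwhitneyu
>
>     responses = pd.DataFrame({
>         "filled_by": ["staff", "staff", "staff", "consultant",
>                       "consultant", "consultant", "staff"],
>         "rating": [4, 5, 3, 5, 5, 4, 4],  # e.g., 1-5 goal attainment
>     })
>
>     staff = responses.loc[responses["filled_by"] == "staff", "rating"]
>     proxy = responses.loc[responses["filled_by"] == "consultant",
>                           "rating"]
>
>     # Mann-Whitney U avoids assuming normality on a short ordinal
>     # scale; a small p-value flags a systematic proxy/direct gap.
>     stat, p = mannwhitneyu(staff, proxy, alternative="two-sided")
>     print(f"U = {stat:.1f}, p = {p:.3f}")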
>
> I hope this is useful.
>
> Wojtek
>
> On Sat, Nov 12, 2011 at 12:04 PM, Gar Lipow <gar.lipow at gmail.com> wrote:
>> I was sent a study recently that consisted of firms who had taken
>> money for projects which they claimed would meet certain goals
>> voluntarily filling out a survey rating how well the completed
>> projects achieved those goals. The forms to obtain the money were
>> filled out by consultants, while the surveys were (probably) filled
>> out by the firms' internal staff. (There was nothing that actually
>> prevented them from using the same consultants to fill out the
>> surveys.) Is this a valid methodology in sociology and other social
>> sciences? I know that for certain kinds of in-depth studies on
>> technical issues voluntary surveys are sometimes used, but is it
>> still valid when there is such a strong incentive to claim success?
>> Every firm taking part hopes to get even more funding in the future.
>>
>> --
>> Facebook: Gar Lipow Twitter: GarLipow
>> Grist Blog: http://www.grist.org/member/1598
>> Static page: http://www.nohairshirts.com
>
--
Facebook: Gar Lipow Twitter: GarLipow
Grist Blog: http://www.grist.org/member/1598
Static page: http://www.nohairshirts.com