
makeItemsScale() generates a random dataframe of scale items based on a predefined summated scale

Usage

makeItemsScale(
  scale,
  lowerbound,
  upperbound,
  items,
  alpha = 0.8,
  summated = TRUE
)

Arguments

scale

(int) a vector or dataframe of the summated rating scale. Should range from \(lowerbound \times items\) to \(upperbound \times items\)

lowerbound

(int) lower bound of the scale item (example: '1' in a '1' to '5' rating)

upperbound

(int) upper bound of the scale item (example: '5' in a '1' to '5' rating)

items

(positive, int) k, or number of columns to generate

alpha

(positive, real) desired Cronbach's Alpha for the new dataframe of items. Default = '0.8'.

See Details for further information on the alpha parameter

summated

(logical) If TRUE, the scale is treated as a summed score (e.g., 4-20 for four 5-point items). If FALSE, it is treated as an averaged score (e.g., 1-5 in 0.25 increments). Default = TRUE.

Value

a dataframe with 'items' columns and 'length(scale)' rows

Details

The makeItemsScale() function reconstructs individual Likert-style item responses from a vector of scale scores while approximating a desired Cronbach's alpha.

The algorithm works in three stages. First, all possible combinations of item responses within the specified bounds are generated. For each candidate combination, the dispersion of item values is calculated and used as a proxy for the similarity between items. Combinations with low dispersion represent more homogeneous item responses and therefore imply stronger inter-item correlations.
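This enumeration stage can be sketched in a few lines of base R. This is an illustrative sketch only, not the package's internal implementation; the bounds, item count, and target score below are arbitrary choices for demonstration.

```r
## Illustrative sketch: enumerate all candidate combinations for
## 3 items on a 1-5 response scale and measure each row's dispersion.
lowerbound <- 1
upperbound <- 5
items <- 3

## all possible combinations of item responses within the bounds
combs <- expand.grid(rep(list(lowerbound:upperbound), items))

## row sum (the implied summated score) and dispersion of each candidate
combs$total <- rowSums(combs[, 1:items])
combs$disp <- apply(combs[, 1:items], 1, sd)

## the most homogeneous candidates for a summated score of 9
cand <- combs[combs$total == 9, ]
head(cand[order(cand$disp), ])
```

Low-dispersion rows such as (3, 3, 3) sit at the top of this ranking, which is what makes dispersion a usable proxy for inter-item similarity.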

Second, the requested Cronbach's alpha is converted to the corresponding average inter-item correlation using the identity

$$\bar r = \alpha / (k - \alpha (k-1))$$

where \(k\) is the number of items. Candidate item combinations are then ranked according to how closely their dispersion matches the similarity implied by this target correlation.
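For example, with the default alpha = 0.8 and k = 4 items, the identity gives the target average inter-item correlation directly:

```r
## target average inter-item correlation implied by the requested alpha
alpha <- 0.8
k <- 4
r_bar <- alpha / (k - alpha * (k - 1))
r_bar
#> [1] 0.5
```

Substituting r̄ = 0.5 back into the standardised alpha formula, k * r̄ / (1 + (k - 1) * r̄) = 2 / 2.5 = 0.8, confirms the round trip.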

Third, for each scale score in the input vector, the algorithm selects the candidate combination whose item values sum to the required scale value and whose dispersion best matches the target correlation structure. The selected values are randomly permuted across item positions, and a final optimisation step rearranges item values within rows to improve the overall correlation structure while preserving each row sum.
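The rearrangement step is possible because permuting values across item positions within a row changes the correlation structure between columns while leaving each row sum, and therefore each scale score, unchanged. A minimal demonstration of that invariant (not the package's optimisation routine):

```r
## permuting item values within a row preserves the summated scale score
set.seed(1)
row_values <- c(2, 4, 3, 5)
permuted <- sample(row_values)
sum(row_values) == sum(permuted)
#> [1] TRUE
```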

This approach produces datasets whose observed Cronbach's alpha closely matches the requested value while respecting the discrete nature of Likert response scales and the constraint that item values must sum to the supplied scale scores.

Extremely high reliability values may be difficult to achieve when the number of items is very small or when the response scale has few categories. In such cases the discreteness of the response scale places an upper bound on the achievable inter-item correlation.
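A small example illustrates the constraint: with two items on a 1-2 response scale, a summated score of 3 can only be produced by heterogeneous rows, so those items can never agree perfectly on such rows.

```r
## every combination of two 1-2 items that sums to 3 mixes both values
combs <- expand.grid(item1 = 1:2, item2 = 1:2)
combs[rowSums(combs) == 3, ]
#>   item1 item2
#> 2     2     1
#> 3     1     2
```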

Examples


## define parameters
k <- 4
lower <- 1
upper <- 5

## scale properties
n <- 64
mean <- 3.0
sd <- 0.85

## create scale
set.seed(42)
meanScale <- lfast(
  n = n, mean = mean, sd = sd,
  lowerbound = lower, upperbound = upper,
  items = k
)
#> best solution in 2841 iterations

## create new items
newItems1 <- makeItemsScale(
  scale = meanScale,
  lowerbound = lower, upperbound = upper,
  items = k, summated = FALSE
)
#> rearrange 4 values within each of 64 rows
#> Complete!
#> desired Cronbach's alpha = 0.8 (achieved alpha = 0.8006)

### test new items
# str(newItems1)
# alpha(data = newItems1) |> round(2)

summatedScale <- meanScale * k

newItems2 <- makeItemsScale(
  scale = summatedScale,
  lowerbound = lower, upperbound = upper,
  items = k
)
#> rearrange 4 values within each of 64 rows
#> Complete!
#> desired Cronbach's alpha = 0.8 (achieved alpha = 0.7996)